Bridging semantics with physical objects using augmented reality
Yu Sun, Hyojoon Bae, S. Manna, Jules White, M. G. Fard
Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015), February 2015
DOI: 10.1109/ICOSC.2015.7050832
Abstract
Today's industry places great emphasis on data-driven and data-engineering technologies, generating a tremendous amount of structured and unstructured data across different domains. As a result, semantic information is implicitly available in the knowledge base, mainly in the form of data descriptions, and needs to be extracted automatically to better serve users' needs. How to deliver that data to end users effectively and efficiently poses a new challenge, particularly in the context of big data and mobile computing. Traditional search-based approaches may suffer from degraded user experience or poor scalability. It is essential to understand meaning (i.e., semantics) rather than rely on pure keyword matching, which can yield entirely spurious, irrelevant results. In this paper, we present the use of an Augmented Reality (AR) solution to bridge existing semantic data and information with real-world physical objects. The AR solution, HD4AR (Hybrid 4-Dimensional Augmented Reality), has been commercialized as a startup company that provides AR services to industry partners, associating valuable semantic information with objects in specific contexts so that users can retrieve the data simply by snapping a photo and having the semantic information rendered on the photo accurately and quickly. Following a brief overview of the technology, we present several use cases as well as lessons learned from the industry collaboration experience.
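As a rough illustration of the snap-a-photo workflow the abstract describes, the sketch below posts a captured image to a hypothetical HD4AR-style localization service and draws the returned annotations onto the photo at their pixel anchors. The server URL, endpoint path, and response schema are assumptions made for illustration only; the paper does not publish the HD4AR API.

```python
# A minimal sketch of the photo-to-annotation flow, assuming a hypothetical
# HTTP service that localizes a photo and returns semantic annotations.
import requests
from PIL import Image, ImageDraw

def annotate_photo(photo_path: str, server_url: str) -> Image.Image:
    """Send a snapped photo to an HD4AR-style service and render the
    returned semantic annotations at their pixel coordinates."""
    with open(photo_path, "rb") as f:
        # Hypothetical endpoint: accepts an image upload, responds with JSON.
        resp = requests.post(f"{server_url}/localize", files={"photo": f})
    resp.raise_for_status()

    image = Image.open(photo_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Assumed response schema: {"annotations": [{"x": int, "y": int, "text": str}, ...]}
    for ann in resp.json().get("annotations", []):
        x, y = ann["x"], ann["y"]
        # Mark the anchor point and render the associated semantic label.
        draw.ellipse((x - 5, y - 5, x + 5, y + 5), outline="red", width=2)
        draw.text((x + 8, y - 8), ann["text"], fill="red")
    return image

if __name__ == "__main__":
    result = annotate_photo("site_photo.jpg", "https://hd4ar.example.com")
    result.save("site_photo_annotated.jpg")
```

The key design point this mirrors is that the heavy work (matching the photo against pre-registered scene data) happens server-side, so the mobile client only needs to capture an image and overlay the returned 2D anchors.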