
2015 IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

[POSTER] An Adaptive Augmented Reality Interface for Hand Based on Probabilistic Approach
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.44
Jinki Jung, Hyeopwoo Lee, H. Yang
In this paper we propose an adaptive Augmented Reality interface for hand gestures based on a probabilistic model. The proposed method provides an in-situ interface and the corresponding functionalities by recognizing the context of hand shape and gesture, which requires accurate recognition of static and dynamic hand states. We present an appearance-based hand feature representation that is robust against hand shape variations, and a feature extraction method based on the fingertip likelihood from a Gaussian mixture model (GMM). Experimental results show that both context sensitivity and accurate hand gesture recognition are achieved, through a quantitative evaluation and an implementation as a three-in-one virtual interface.
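As a rough illustration of the fingertip-likelihood idea, the sketch below fits a Gaussian mixture model to 2D features and scores candidates by log-likelihood. It is a minimal example under assumed synthetic data and an illustrative threshold, not the authors' implementation or feature set.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: 2D appearance features sampled from
# labeled fingertip regions (stand-ins for the paper's features).
rng = np.random.default_rng(0)
fingertip_features = rng.normal(loc=[0.5, 0.2], scale=0.05, size=(500, 2))

# Fit a GMM to model the distribution of fingertip features.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(fingertip_features)

# Score candidate pixels: a higher log-likelihood means "more fingertip-like".
candidates = np.array([[0.51, 0.21], [0.90, 0.90]])
log_likelihood = gmm.score_samples(candidates)
is_fingertip = log_likelihood > 0.0  # threshold chosen for illustration only
print(log_likelihood, is_fingertip)
```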
Citations: 0
[POSTER] Marker Identification Using IR LEDs and RGB Color Descriptors
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.30
G. Koutaki, Shodai Hirata, Hiromu Sato, K. Uchimura
In optical motion capture systems, it is difficult to correctly recognize markers by their unique identifiers (IDs) from a single frame. In this paper, we propose using two types of light-emitting diodes (LEDs) and cameras, infrared (IR) and RGB, to correctly detect and identify all markers used to track objects in a given system. To detect and estimate the three-dimensional (3D) position of each marker, we measure the IR LEDs using IR stereo cameras. To identify each marker, we then compute and compare an RGB color descriptor in the vicinity of its center. Our system consists of general-purpose IR and RGB cameras and is easy to extend by increasing the number of cameras. We implemented an IR/RGB LED marker circuit and constructed a simple motion capture system to test the effectiveness of our approach. The results show that our system can detect the 3D positions and unique IDs of markers in one frame.
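A minimal sketch of the two-stage pipeline the abstract describes, with synthetic stand-ins for the calibration and image data (the projection matrices, blob positions, and the ID-to-color table are all assumptions): triangulate the marker from the IR stereo pair, then identify it by comparing a mean-color descriptor around its projection in the RGB view.

```python
import numpy as np
import cv2

# --- 3D position: triangulate a marker blob seen by the calibrated IR pair ---
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # left IR camera
P2 = np.hstack([np.eye(3), [[-0.1], [0.0], [0.0]]])  # right camera, 10 cm baseline
pt_left = np.array([[0.10], [0.05]])   # normalized image coordinates (assumed)
pt_right = np.array([[0.08], [0.05]])
X_h = cv2.triangulatePoints(P1, P2, pt_left, pt_right)
X = (X_h[:3] / X_h[3]).ravel()         # 3D marker position

# --- Identification: compare an RGB color descriptor near the marker center ---
# Synthetic RGB frame with a red LED blob at the marker's projected position.
rgb = np.zeros((480, 640, 3), np.uint8)
rgb[235:246, 315:326] = (40, 40, 255)  # OpenCV images are BGR
u, v = 320, 240                        # projected marker center (assumed)
patch = rgb[v - 5:v + 6, u - 5:u + 6].reshape(-1, 3)[:, ::-1]  # BGR -> RGB
descriptor = patch.astype(np.float64).mean(axis=0)

id_colors = {1: np.array([255, 40, 40]),   # hypothetical ID -> LED color table
             2: np.array([40, 255, 40])}
marker_id = min(id_colors, key=lambda k: np.linalg.norm(id_colors[k] - descriptor))
print(X, marker_id)  # expected: the triangulated position and ID 1 (red)
```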
Citations: 2
[POSTER] Augmented Wire Routing Navigation for Wire Assembly
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.28
M. Rice, Hong Huei Tay, Jamie Ng, C. Lim, S. Selvaraj, Ellick Wu
Within modern manufacturing, digital solutions are needed to optimize and aid shop floor processes. This includes user-centered technologies that can be appropriately integrated into factory environments to improve the efficiency of manufacturing tasks. In this paper, we present a dynamic system to support the electrical wiring assembly of commercial aircraft. Specifically, we describe the system design, which aims to improve the productivity of factory operators through the integration of wearable and mobile solutions. We report an evaluation of the system's augmented reality component, conducted with 12 participants using a pair of smart glasses, and describe important interaction issues in the ongoing development of this work.
Citations: 4
Introducing Augmented Reality to Optical Coherence Tomography in Ophthalmic Microsurgery
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.15
Hessam Roodaki, K. Filippatos, A. Eslami, Nassir Navab
Augmented Reality (AR) in microscopic surgery has been the subject of several studies in the past two decades. Nevertheless, AR has not found its way into everyday microsurgical workflows. The introduction of new surgical microscopes equipped with Optical Coherence Tomography (OCT) enables surgeons to perform multimodal (optical and OCT) imaging in the operating room. Taking full advantage of such an elaborate source of information requires sophisticated intraoperative image fusion, information extraction, guidance, and visualization methods. Medical AR is a unique approach to facilitating the use of multimodal medical imaging devices. Here we propose a novel medical AR solution to the long-known problem of determining the distance between the surgical instrument tip and the underlying tissue in ophthalmic surgery, further paving the way for AR into the surgical theater. Our method brings augmented reality to OCT for the first time by augmenting the surgeon's view of the OCT images with an estimated instrument cross-section shape and distance to the retinal surface, using only information from the shadow of the instrument in intraoperative OCT images. We demonstrate the applicability of our method to retinal surgery using a phantom eye and evaluate the accuracy of the augmented information using a micromanipulator.
Citations: 22
[POSTER] Natural 3D Interaction Using a See-Through Mobile AR System
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.27
Yuko Unuma, T. Komuro
In this paper, we propose an interaction system in which the appearance of the image displayed on a mobile display is consistent with that of the real space, enabling a user to interact with virtual objects overlaid on the image using his or her hand. The three-dimensional scene obtained by a depth camera is projected according to the user's viewpoint position, obtained by face tracking, and a see-through image whose appearance is consistent with the scene outside the mobile display is generated. Interaction with virtual objects is realized using the depth information obtained by the depth camera. To make virtual objects move as if they were in real space, they are rendered in a world coordinate system that stays fixed to the real scene even when the mobile display moves, and the direction of the gravitational force applied to virtual objects is made consistent with that of the world coordinate system. The former is realized using the ICP (Iterative Closest Point) algorithm and the latter using the information obtained by an accelerometer. Thus, natural interaction with virtual objects using the user's hand is realized.
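For the world-coordinate anchoring step, the abstract names the ICP algorithm. The sketch below shows the general idea using Open3D's point-to-point ICP on synthetic depth frames; it illustrates the technique only and is not the authors' pipeline.

```python
import numpy as np
import open3d as o3d

# Synthetic stand-in for two consecutive depth-camera frames: a random
# point cloud and the same cloud seen after a 2 cm sideways camera motion.
rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, (2000, 3))
src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts + [0.02, 0.0, 0.0]))

# Point-to-point ICP estimates the rigid transform between the frames;
# chaining these frame-to-frame transforms keeps virtual objects anchored
# in a world coordinate system fixed to the real scene.
reg = o3d.pipelines.registration.registration_icp(
    src, tgt, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(reg.transformation)  # ~ identity rotation with a 0.02 x-translation
```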
Citations: 4
[POSTER] On-site AR Interface with Web-Based 3D Archiving System for Archaeological Project
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.68
Ryosuke Matsushita, Tokihisa Higo, H. Suita, Y. Yasumuro
This paper proposes an AR (augmented reality) interface for on-site use in an archaeological project. We have been developing a web-based 3D archiving system to support the diverse specialties and nationalities needed to carry out the survey and restoration work of the archaeological project. Our 3D archiving system is designed for the spontaneous updating and sharing of information on findings, to better enable frequent discussions, through a 3D virtual copy of the field site that a user can visit, explore, and embed information into over the Internet. Here we present an AR user interface that enhances access to the archiving system from mobile devices at the actual site. Using SFM (structure from motion) and solving the Perspective-n-Point (PnP) problem, a photo taken at the site can be stably matched to the pre-registered photo sets in the archiving system, and the archived information is automatically overlaid on the photo, in the same manner as exploring the virtual version of the site on a desktop PC. Our implementation is working effectively in an ongoing project in Saqqara, Egypt.
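The photo-to-archive registration step (match against pre-registered imagery, then solve PnP for the camera pose) can be sketched with OpenCV as follows. The intrinsics, pose, and correspondences are synthetic stand-ins; a real pipeline would obtain 2D-3D matches by feature matching against the SfM model, typically with a RANSAC variant of solvePnP to reject outliers.

```python
import numpy as np
import cv2

# Synthetic stand-ins for what the archiving system provides: 3D points from
# the SfM reconstruction and their 2D matches in the newly taken photo.
rng = np.random.default_rng(1)
object_pts = rng.uniform(-1, 1, (20, 3)).astype(np.float32)
object_pts[:, 2] += 5.0                       # points in front of the camera

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
rvec_true = np.array([0.05, -0.02, 0.01], dtype=np.float32)
tvec_true = np.array([0.10, 0.00, 0.00], dtype=np.float32)
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Recover the on-site photo's camera pose; archived annotations can then be
# projected into the photo with (rvec, tvec), which is what the overlay needs.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())         # ~ the true pose
```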
Citations: 1
[POSTER] Authoring Tools in Augmented Reality: An Analysis and Classification of Content Design Tools
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.47
Roberta Cabral Mota, R. Roberto, V. Teichrieb
Augmented Reality authoring tools are important instruments that can help bring AR into widespread use. They can be classified as programming tools or content design tools, where the latter completely removes the need for programming skills to develop an AR solution. Several solutions have been developed over the past years; however, few works aim to identify patterns and general models for such tools. This work performs a trend analysis of content design tools in order to identify their AR functionalities, authoring paradigms, deployment strategies, and general dataflow models. Because this work is intended to assist developers who want to create authoring tools, it focuses on the last three aspects. In total, 19 tools were analyzed, and through this evaluation two authoring paradigms and two deployment strategies were identified. Moreover, from their combination it was possible to elaborate four generic dataflow models into which every tool can fit.
Citations: 5
[POSTER] Road Maintenance MR System Using LRF and PDR
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.66
Ching-Tzun Chang, Ryosuke Ichikari, Koji Makita, T. Okuma, T. Kurata
We have been developing a mixed reality system that supports road maintenance using overlaid visual aids. Such a system requires a positioning method that can provide sub-meter accuracy and keep functioning even when the appearance of the road surface changes significantly due to factors such as construction phase, time, and weather. We are therefore developing a real-time worker positioning method applicable to these situations by integrating laser range finder (LRF) and pedestrian dead-reckoning (PDR) data. In the field, multiple workers move around the workspace, so it is necessary to determine corresponding pairs of PDR-based and LRF-based trajectories by identifying similar trajectories. In this study, we propose a method to calculate the similarity between trajectories and a procedure to integrate corresponding pairs of trajectories to acquire the position and movement direction of each worker.
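The abstract does not spell out the similarity measure, so the following minimal sketch uses one plausible choice, mean point-wise distance after arc-length resampling and start-point alignment, to pair each PDR trajectory with its most similar LRF trajectory. All data here are synthetic.

```python
import numpy as np

def resample(traj, n=50):
    """Resample a 2D trajectory (M x 2) to n points, uniform in arc length."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(traj, axis=0), axis=1))]
    s = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(s, d, traj[:, i]) for i in range(2)])

def similarity(traj_a, traj_b, n=50):
    """Mean point-wise distance after resampling and aligning the start
    points (lower = more similar)."""
    a, b = resample(traj_a, n), resample(traj_b, n)
    a, b = a - a[0], b - b[0]
    return float(np.linalg.norm(a - b, axis=1).mean())

# Pair each PDR trajectory with the LRF trajectory it most resembles.
rng = np.random.default_rng(4)
lrf = [np.cumsum(rng.normal(0, 0.1, (100, 2)), axis=0) for _ in range(3)]
pdr = [t + rng.normal(0, 0.05, t.shape) for t in lrf]  # noisy copies
pairs = [min(range(3), key=lambda j: similarity(p, lrf[j])) for p in pdr]
print(pairs)  # expected: [0, 1, 2]
```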
Citations: 4
[POSTER] Natural User Interface for Ambient Objects
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.25
Meng Ma, Kevin Merckx, P. Fallavollita, Nassir Navab
To help the computing device understand the spatial relationship between the user's gesture and ambient objects at all times, a methodology is proposed to find the user's virtual eye center in the wearable camera coordinate system and then accurately calculate where the user is pointing, enabling natural interaction. First, a wearable RGB-D sensor is affixed around the user's forehead. A tool-free calibration is performed by having the user move a finger along the lines of sight from the eye center to randomly selected targets. The fingertips are detected in the depth camera, and the intersection of these lines of sight is calculated. We then present how to find where the user is pointing in different scenarios, with a depth map, a detected object, and a controlled virtual element. To validate our method, we perform a point-to-screen experiment. Results demonstrate that when a user interacts with a display up to 1.5 meters away, our natural gesture interface has an average error of 2.1 cm. In conclusion, the presented technique is a viable option for reliable user interaction.
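The calibration reduces to a classic geometric problem: the eye center is the least-squares point nearest to all lines of sight. A minimal sketch under assumed synthetic data, with a hypothetical true eye position and fingertip samples lying on those lines:

```python
import numpy as np

def nearest_point_to_lines(points, dirs):
    """Least-squares point closest to a set of 3D lines p_i + t * d_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector onto the line's normal plane
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Synthetic calibration: each sample is a fingertip lying on the line of sight
# from the (hypothetical) true eye center to a randomly selected target.
rng = np.random.default_rng(5)
eye_true = np.array([0.0, 0.05, -0.10])
targets = rng.uniform(-0.5, 0.5, (6, 3)) + [0.0, 0.0, 1.0]
dirs = targets - eye_true            # line-of-sight directions
fingertips = eye_true + 0.4 * dirs   # fingertips on those lines
print(nearest_point_to_lines(fingertips, dirs))  # ~ eye_true
```

Once the eye center is estimated, the pointing ray runs from it through the currently detected fingertip and can be intersected with the depth map, a detected object, or the screen plane.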
Citations: 2
A Framework to Evaluate Omnidirectional Video Coding Schemes
Pub Date: 2015-09-29 DOI: 10.1109/ISMAR.2015.12
Matt C. Yu, H. Lakshman, B. Girod
Omnidirectional videos of real-world environments, viewed on head-mounted displays with real-time head motion tracking, can offer immersive visual experiences. For live streaming applications, compression is critical to reduce the bitrate. Omnidirectional videos, which are spherical in nature, are mapped onto one or more planes before encoding to interface with modern video coding standards. In this paper, we consider the problem of evaluating coding efficiency in the context of viewing with a head-mounted display. We extract viewport-based head motion trajectories and compare the original and coded videos on the viewport. With this approach, we compare different sphere-to-plane mappings. We show that the average viewport quality can be approximated by a weighted spherical PSNR.
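As one concrete instantiation of a weighted spherical PSNR, the sketch below weights equirectangular pixels by the cosine of their latitude so that oversampled polar rows do not dominate the error. This is our own illustration of the general idea, not necessarily the exact weighting derived in the paper.

```python
import numpy as np

def weighted_spherical_psnr(ref, dist, max_val=255.0):
    """PSNR over an equirectangular frame with cos(latitude) pixel weights."""
    h, w = ref.shape
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2  # latitude of each row
    wts = np.repeat(np.cos(lat)[:, None], w, axis=1)
    mse = np.sum(wts * (ref - dist) ** 2) / np.sum(wts)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(6)
ref = rng.integers(0, 256, (480, 960)).astype(np.float64)
dist = np.clip(ref + rng.normal(0, 3.0, ref.shape), 0, 255)
print(weighted_spherical_psnr(ref, dist))  # ~ 38-39 dB for sigma = 3
```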
Citations: 344