
Latest publications: 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)

The Influence of using Augmented Reality on Textbook Support for Learners of Different Learning Styles
Pub Date : 2016-12-12 DOI: 10.1109/ISMAR.2016.26
Jia Zhang, A. Ogan, Tzu-Chien Liu, Y. Sung, Kuo-En Chang
Numerous studies have shown that applying Augmented Reality (AR) to teaching and learning is beneficial, but determining the reasons behind its effectiveness, and in particular the characteristics of students for whom AR is best suited, can open new opportunities to integrate adaptive instruction with AR in the future. Using a quasi-experimental research design, our study recruited 66 participants for an 8-week AR-assisted learning activity, and lag sequential analysis was used to analyze participants' behavior in the AR learning environment. We found that AR was more effective at enhancing elementary school science learning gains for learners who prefer a kinesthetic approach to learning. We hypothesize that these effects are due to the increased opportunity for hands-on activities, which effectively raises learners' concentration and passion for learning.
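As a note on the analysis method: lag sequential analysis essentially counts how often one coded behaviour immediately follows another and tests whether a transition occurs more often than chance. A minimal lag-1 transition-count sketch follows, with made-up behaviour codes rather than the study's data.

```python
# Toy lag-1 transition counting; behaviour codes are invented for illustration.
from collections import Counter

codes = ["observe", "manipulate", "discuss", "manipulate", "observe", "manipulate"]
transitions = Counter(zip(codes, codes[1:]))      # (behaviour_t, behaviour_t+1) pairs
print(transitions[("observe", "manipulate")])     # how often "observe" is followed by "manipulate"
```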
Citations: 13
Practical and Precise Projector-Camera Calibration
Pub Date : 2016-09-19 DOI: 10.1109/ISMAR.2016.22
Liming Yang, Jean-Marie Normand, G. Moreau
Projectors are important display devices for large scale augmented reality applications. However, precisely calibrating projectors with large focus distances implies a trade-off between practicality and accuracy. People either need a huge calibration board or a precise 3D model [12]. In this paper, we present a practical projector-camera calibration method to solve this problem. The user only needs a small calibration board to calibrate the system regardless of the focus distance of the projector. Results show that the root-mean-squared re-projection error (RMSE) for a 450cm projection distance is only about 4mm, even though it is calibrated using a small B4 (250×353mm) calibration board.
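For reference, the reported error is the standard re-projection RMSE; a minimal sketch of how such a metric can be computed with OpenCV is shown below. All arrays and names are placeholders, and the paper reports the error in millimetres at the projection surface rather than in pixels.

```python
# Minimal sketch (not the authors' code): RMS re-projection error of a calibration,
# computed by re-projecting known 3D board points with the estimated pose/intrinsics.
import numpy as np
import cv2

def reprojection_rmse(object_points, observed_px, K, dist, rvec, tvec):
    """Root-mean-squared re-projection error in pixels."""
    projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
    err = projected.reshape(-1, 2) - observed_px.reshape(-1, 2)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```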
Citations: 25
Reality Skins: Creating Immersive and Tactile Virtual Environments
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.23
Lior Shapira, D. Freedman
Reality Skins enables mobile and large-scale virtual reality experiences, dynamically generated based on the user's environment. A head-mounted display (HMD) coupled with a depth camera is used to scan the user's surroundings: reconstruct geometry, infer floor plans, and detect objects and obstacles. From these elements we generate a Reality Skin, a 3D environment which replaces office or apartment walls with the corridors of a spaceship or underground tunnels, replacing chairs and desks, sofas and beds with crates and computer consoles, fungi and crumbling ancient statues. The placement of walls, furniture and objects in the Reality Skin attempts to approximate reality, such that the user can move around, and touch virtual objects with tactile feedback from real objects. Each possible Reality Skins world consists of objects, materials and custom scripts. Taking cues from the user's surroundings, we create a unique environment combining these building blocks, attempting to preserve the geometry and semantics of the real world. We tackle 3D environment generation as a constraint satisfaction problem, and break it into two parts: First, we use a Markov Chain Monte-Carlo optimization, over a simple 2D polygonal model, to infer the layout of the environment (the structure of the virtual world). Then, we populate the world with various objects and characters, attempting to satisfy geometric (virtual objects should align with objects in the environment), semantic (a virtual chair aligns with a real one), physical (avoid collisions, maintain stability) and other constraints. We find a discrete set of transformations for each object satisfying unary constraints, incorporate pairwise and higher-order constraints, and optimize globally using a very recent technique based on semidefinite relaxation.
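A toy illustration of the kind of Markov Chain Monte-Carlo layout search described above, assuming a generic layout cost and proposal function rather than the paper's actual energy terms:

```python
# Toy Metropolis-Hastings loop for layout optimization; cost() and propose() are
# placeholders (e.g. a wall-overlap penalty and a "jitter one wall" move), not the
# paper's energy terms.
import math, random

def mcmc_layout(initial_layout, cost, propose, iters=5000, temperature=1.0):
    current, current_cost = initial_layout, cost(initial_layout)
    best, best_cost = current, current_cost
    for _ in range(iters):
        candidate = propose(current)                  # perturb the 2D polygonal layout
        candidate_cost = cost(candidate)
        accept_prob = math.exp(min(0.0, (current_cost - candidate_cost) / temperature))
        if random.random() < accept_prob:             # always accept downhill, sometimes uphill
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
    return best
```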
Citations: 33
Edge Snapping-Based Depth Enhancement for Dynamic Occlusion Handling in Augmented Reality
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.17
Chao Du, Yen-Lin Chen, Mao Ye, Liu Ren
Dynamic occlusion handling is critical for correct depth perception in Augmented Reality (AR) applications. Consequently it is a key component to ensure realistic and immersive AR experiences. Existing solutions to tackle this challenge typically suffer from various limitations, e.g. assumption of a static scene or high computational complexity. In this work, we propose an algorithm for depth map enhancement for dynamic occlusion handling in AR applications. The key of our algorithm is an edge snapping approach, formulated as discrete optimization, that improves the consistency of object boundaries between RGB and depth data. The optimization problem is solved efficiently via dynamic programming and our system runs in near real-time on the tablet platform. Experimental evaluations demonstrate that our approach largely improves the raw sensor data and is particularly suitable compared to several related approaches in terms of both speed and quality. Furthermore, we demonstrate visually pleasing dynamic occlusion effects for multiple AR use cases based on our edge snapping results.
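A stripped-down sketch of an edge-snapping pass solved by dynamic programming, in the spirit of the approach described above; the cost terms are illustrative, not the paper's formulation.

```python
# For each boundary column, pick the row that lies on a strong RGB edge while staying
# close to the previous column's choice (unary edge cost + smoothness), solved by DP.
import numpy as np

def snap_boundary(edge_cost, smooth_weight=1.0):
    """edge_cost: (rows, cols) array, lower = stronger image edge.
    Returns one snapped row index per column."""
    rows, cols = edge_cost.shape
    dp = np.zeros((rows, cols))
    back = np.zeros((rows, cols), dtype=int)
    dp[:, 0] = edge_cost[:, 0]
    r_idx = np.arange(rows)
    for c in range(1, cols):
        # transition cost penalizes jumps between neighbouring columns
        trans = dp[:, c - 1][None, :] + smooth_weight * np.abs(r_idx[:, None] - r_idx[None, :])
        back[:, c] = trans.argmin(axis=1)
        dp[:, c] = edge_cost[:, c] + trans.min(axis=1)
    # backtrack the minimal-cost path
    path = np.empty(cols, dtype=int)
    path[-1] = int(dp[:, -1].argmin())
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```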
Citations: 32
PPV: Pixel-Point-Volume Segmentation for Object Referencing in Collaborative Augmented Reality
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.21
Kuo-Chin Lien, B. Nuernberger, Tobias Höllerer, M. Turk
We present a method for collaborative augmented reality (AR) that enables users from different viewpoints to interpret object references specified via 2D on-screen circling gestures. Based on a user's 2D drawing annotation, the method segments out the user-selected object using an incomplete or imperfect scene model and the color image from the drawing viewpoint. Specifically, we propose a novel segmentation algorithm that utilizes both 2D and 3D scene cues, structured into a three-layer graph of pixels, 3D points, and volumes (supervoxels), solved via standard graph cut algorithms. This segmentation enables an appropriate rendering of the user's 2D annotation from other viewpoints in 3D augmented reality. Results demonstrate the superiority of the proposed method over existing methods.
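To illustrate the graph-cut machinery such a method builds on, here is a toy s-t minimum cut over a few hand-named pixel/point nodes using networkx; the graph, node names and capacities are invented purely for illustration and are not the paper's three-layer construction.

```python
# Toy s-t min-cut in the spirit of a pixel/point/supervoxel segmentation graph.
import networkx as nx

G = nx.DiGraph()
# unary terms: how strongly each element agrees with the user's 2D circle
G.add_edge("src", "pixel_a", capacity=3.0)
G.add_edge("pixel_a", "sink", capacity=0.5)
G.add_edge("src", "point_1", capacity=2.0)
G.add_edge("point_1", "sink", capacity=1.0)
# pairwise terms: elements that project onto each other should share a label
G.add_edge("pixel_a", "point_1", capacity=1.5)
G.add_edge("point_1", "pixel_a", capacity=1.5)

cut_value, (foreground, background) = nx.minimum_cut(G, "src", "sink")
print(cut_value, sorted(foreground - {"src"}))    # elements labelled as the selected object
```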
Citations: 7
Augmented Reality 3D Discrepancy Check in Industrial Applications
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.15
Oliver Wasenmüller, Marcel Meyer, D. Stricker
Discrepancy check is a well-known task in industrial Augmented Reality (AR). In this paper we present a new approach consisting of three main contributions: First, we propose a new two-step depth mapping algorithm for RGB-D cameras, which fuses depth images with given camera pose in real-time into a consistent 3D model. In a rigorous evaluation with two public benchmarks we show that our mapping outperforms the state-of-the-art in accuracy. Second, we propose a semi-automatic alignment algorithm, which rapidly aligns a reference model to the reconstruction. Third, we propose an algorithm for 3D discrepancy check based on pre-computed distances. In a systematic evaluation we show the superior performance of our approach compared to state-of-the-art 3D discrepancy checks.
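A minimal sketch of a distance-based discrepancy check of the kind described (not the paper's implementation): each reconstructed point is compared against its nearest reference-model point and flagged when the distance exceeds a tolerance. The arrays and the 2 mm tolerance are placeholders.

```python
# Flag reconstructed geometry that deviates from the reference CAD/mesh model.
import numpy as np
from scipy.spatial import cKDTree

def discrepancy_mask(recon_pts, reference_pts, tolerance=0.002):
    """recon_pts: (N, 3), reference_pts: (M, 3), both in metres."""
    tree = cKDTree(reference_pts)          # stands in for pre-computed distances
    dist, _ = tree.query(recon_pts)        # nearest reference point per reconstructed point
    return dist > tolerance                # True where the geometry deviates
```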
Citations: 33
Automated Spatial Calibration of HMD Systems with Unconstrained Eye-cameras
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.16
Alexander Plopski, J. Orlosky, Yuta Itoh, Christian Nitschke, K. Kiyokawa, G. Klinker
Properly calibrating an optical see-through head-mounted display (OST-HMD) and maintaining a consistent calibration over time can be a very challenging task. Automated methods need an accurate model of both the OST-HMD screen and the user's constantly changing eye-position to correctly project virtual information. While some automated methods exist, they often have restrictions, including fixed eye-cameras that cannot be adjusted for different users.To address this problem, we have developed a method that automatically determines the position of an adjustable eye-tracking camera and its unconstrained position relative to the display. Unlike methods that require a fixed pose between the HMD and eye camera, our framework allows for automatic calibration even after adjustments of the camera to a particular individual's eye and even after the HMD moves on the user's face. Using two sets of IR-LEDs rigidly attached to the camera and OST-HMD frame, we can calculate the correct projection for different eye positions in real time and changes in HMD position within several frames. To verify the accuracy of our method, we conducted two experiments with a commercial HMD by calibrating a number of different eye and camera positions. Ground truth was measured through markers on both the camera and HMD screens, and we achieve a viewing accuracy of 1.66 degrees for the eyes of 5 different experiment participants.
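For context, eye-position-dependent OST-HMD rendering ultimately feeds an off-axis (asymmetric) frustum; below is a generic sketch of the standard generalized perspective projection, with screen extents and eye position as placeholders. This is textbook material, not the paper's calibration code.

```python
# Off-axis projection matrix for an eye at (ex, ey, ez) in the virtual-screen frame,
# with the screen plane at z = 0 and the eye in front of it (ez > 0).
import numpy as np

def off_axis_projection(eye, screen_l, screen_r, screen_b, screen_t, near=0.05, far=100.0):
    ex, ey, ez = eye
    l = (screen_l - ex) * near / ez
    r = (screen_r - ex) * near / ez
    b = (screen_b - ey) * near / ez
    t = (screen_t - ey) * near / ez
    return np.array([
        [2 * near / (r - l), 0.0,                (r + l) / (r - l),            0.0],
        [0.0,                2 * near / (t - b), (t + b) / (t - b),            0.0],
        [0.0,                0.0,                -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,                0.0,                -1.0,                          0.0],
    ])
```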
Citations: 16
Analysis of Medium Wrap Freehand Virtual Object Grasping in Exocentric Mixed Reality
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.14
Maadh Al Kalbani, Ian Williams, Maite Frutos Pascual
This article presents an analysis of the accuracy and problems of freehand grasping in exocentric Mixed Reality (MR). We report on two experiments (1710 grasps) which quantify the influence that different virtual object shapes, sizes and positions have on the most common physical grasp, a medium wrap. We propose two methods for grasp measurement, namely the Grasp Aperture (GAp) and Grasp Displacement (GDisp). Controlled laboratory conditions are used in which 30 right-handed participants attempt to recreate a medium wrap grasp. We present a comprehensive statistical analysis of the results, giving pairwise comparisons of all conditions under test. The results illustrate that user Grasp Aperture varies less than expected relative to the variation in virtual object size, with common aperture sizes found. Regarding the position of the virtual object, depth estimation is often mismatched due to under-judgement of the z position, and x, y displacement follows common patterns. Results from this work can be applied to aid the development of freehand grasping and, as the first study into the accuracy of freehand grasping in MR, provide a starting point for future interaction design.
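The abstract does not give formulas for the two measures, so the following definitions (thumb-to-index distance for Grasp Aperture, grasp-to-object-centre offset for Grasp Displacement) are an assumption for illustration only, not the authors' exact metrics.

```python
# Illustrative-only interpretations of the GAp and GDisp measures named above.
import numpy as np

def grasp_aperture(thumb_tip, index_tip):
    """GAp: Euclidean distance between thumb and index fingertips (metres), assumed definition."""
    return float(np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip)))

def grasp_displacement(grasp_centroid, object_centre):
    """GDisp: per-axis offset of the grasp from the virtual object centre, assumed definition."""
    return np.asarray(grasp_centroid) - np.asarray(object_centre)
```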
Citations: 18
σ-DVO: Sensor Noise Model Meets Dense Visual Odometry
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.11
B. W. Babu, Soohwan Kim, Zhixin Yan, Liu Ren
In this paper we propose a novel method called σ-DVO for dense visual odometry using a probabilistic sensor noise model. In contrast to sparse visual odometry, where camera poses are estimated based on matched visual features, we apply dense visual odometry which makes full use of all pixel information from an RGB-D camera. Previously, t-distribution was used to model photometric and geometric errors in order to reduce the impacts of outliers in the optimization. However, this approach has the limitation that it only uses the error value to determine outliers without considering the physical process. Therefore, we propose to apply a probabilistic sensor noise model to weigh each pixel by propagating linearized uncertainty. Furthermore, we find that the geometric errors are well represented with the sensor noise model, while the photometric errors are not. Finally we propose a hybrid approach which combines t-distribution for photometric errors and a probabilistic sensor noise model for geometric errors. We extend the dense visual odometry and develop a visual SLAM system that incorporates keyframe generation, loop constraint detection and graph optimization. Experimental results with standard benchmark datasets show that our algorithm outperforms previous methods by about a 25% reduction in the absolute trajectory error.
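A compact sketch of the hybrid weighting idea: Student-t weights for photometric residuals and inverse-variance weights for geometric residuals, where the quadratic depth-noise model below is a common approximation standing in for the paper's exact sensor model.

```python
# Per-pixel robust weights for a dense RGB-D residual; parameters are illustrative.
import numpy as np

def t_dist_weights(residuals, sigma, nu=5.0):
    """Student-t robust weights for photometric residuals."""
    return (nu + 1.0) / (nu + (residuals / sigma) ** 2)

def sensor_noise_weights(depth, a=0.0012, b=0.0019):
    """Inverse-variance weights from a depth-dependent noise model sigma_z(z)."""
    sigma_z = a + b * (depth - 0.4) ** 2      # e.g. Kinect-style quadratic growth with depth
    return 1.0 / sigma_z ** 2

def weighted_sse(photo_res, geo_res, photo_sigma, depth):
    wp = t_dist_weights(photo_res, photo_sigma)
    wg = sensor_noise_weights(depth)
    return float(np.sum(wp * photo_res ** 2) + np.sum(wg * geo_res ** 2))
```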
Citations: 16
Learning to Fuse: A Deep Learning Approach to Visual-Inertial Camera Pose Estimation
Pub Date : 2016-09-01 DOI: 10.1109/ISMAR.2016.19
J. Rambach, Aditya Tewari, A. Pagani, D. Stricker
Camera pose estimation is the cornerstone of Augmented Reality applications. Pose tracking based exclusively on camera images has been shown to be sensitive to motion blur, occlusions, and illumination changes. Thus, a lot of work has been conducted over the last years on visual-inertial pose tracking using acceleration and angular velocity measurements from inertial sensors in order to improve the visual tracking. Most proposed systems use statistical filtering techniques to approach the sensor fusion problem, which require complex system modelling and calibration in order to perform adequately. In this work we present a novel approach to sensor fusion using a deep learning method to learn the relation between camera poses and inertial sensor measurements. A long short-term memory model (LSTM) is trained to provide an estimate of the current pose based on previous poses and inertial measurements. This estimate is then appropriately combined with the output of a visual tracking system using a linear Kalman filter to provide a robust final pose estimate. Our experimental results confirm the applicability and tracking performance improvement gained from the proposed sensor fusion system.
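A minimal PyTorch sketch of the learning component described above: an LSTM mapping a window of previous poses and IMU samples to a current-pose estimate, which a Kalman filter would then fuse with the visual tracker output. Feature sizes and the 6-DoF pose parameterisation are assumptions, not the paper's architecture.

```python
# LSTM pose predictor from previous poses + inertial measurements (dimensions assumed).
import torch
import torch.nn as nn

class InertialPoseLSTM(nn.Module):
    def __init__(self, input_dim=12, hidden_dim=128, pose_dim=6):
        super().__init__()
        # per time step: previous pose (6) + accelerometer (3) + gyroscope (3)
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, pose_dim)

    def forward(self, seq):                  # seq: (batch, time, input_dim)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])         # pose estimate for the current frame

model = InertialPoseLSTM()
pred = model(torch.randn(1, 20, 12))         # 20-step window of dummy data
```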
Citations: 49