
2011 10th IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

Evolutionary augmented reality at the Natural History Museum
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092400
P. Debenham, G. Thomas, Jonathan Trout
In this paper we describe the development of an augmented reality system designed to provide an exciting new way for the Natural History Museum in London to present evolutionary history to its visitors. The system uses a through-the-lens tracker and infrared LED markers, providing an unobtrusive and robust setup that can operate for multiple users across a wide area.
Citations: 16
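
A minimal sketch of how a through-the-lens tracker might recover camera pose from infrared LED markers, assuming OpenCV: bright blobs are segmented in the IR image and matched against known 3D LED positions. The LED layout, intrinsics, threshold, and blob-to-LED correspondence handling below are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch: segment IR LED blobs and solve for a 6DOF pose.
import cv2
import numpy as np

# Assumed 3D positions of four IR LEDs in the scene (metres).
LED_POINTS_3D = np.array([[0.0, 0.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.5, 0.4, 0.0],
                          [0.0, 0.4, 0.0]], dtype=np.float32)
# Assumed pinhole intrinsics of the tracked camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def track_pose(ir_frame):
    """Recover the camera pose from a grayscale IR image, or None if lost."""
    _, mask = cv2.threshold(ir_frame, 240, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if len(contours) < 4:
        return None  # not enough markers visible
    # Blob centroids become 2D observations. A real system must also solve
    # the blob-to-LED correspondence problem (e.g. via blink codes).
    pts_2d = np.array([cv2.minEnclosingCircle(c)[0] for c in contours[:4]],
                      dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(LED_POINTS_3D, pts_2d, K, None)
    return (rvec, tvec) if ok else None
```
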
Homography-based planar mapping and tracking for mobile phones
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092367
Christian Pirchheim, Gerhard Reitmayr
We present a real-time camera pose tracking and mapping system which uses the assumption of a planar scene to implement a highly efficient mapping algorithm. Our lightweight mapping approach is based on keyframes and the plane-induced homographies between them. We solve the planar reconstruction problem of estimating the keyframe poses with an efficient image rectification algorithm. Camera pose tracking uses continuously extended and refined planar point maps and delivers robustly estimated 6DOF poses. We compare our system with bundle adjustment and monocular SLAM on synthetic and indoor image sequences, and demonstrate large savings in computational effort compared to the monocular SLAM system while the reduction in accuracy remains acceptable.
Citations: 40
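
The core idea, plane-induced homographies between keyframes, can be illustrated with standard OpenCV calls: estimate a homography from feature matches, then decompose it into candidate relative poses. This is a sketch under assumed intrinsics and ORB features, not the paper's actual pipeline.

```python
# Hypothetical sketch: plane-induced homography between two keyframes.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose_from_plane(img_a, img_b):
    """Estimate candidate relative poses between two views of a plane."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Decomposition yields up to four (R, t, n) candidates; a real system
    # prunes the wrong ones with cheirality/visibility constraints.
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals
```
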
Gravity-aware handheld Augmented Reality
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092376
Daniel Kurz, Selim Benhimane
This paper investigates how different stages in handheld Augmented Reality (AR) applications can benefit from knowing the direction of gravity, as measured with inertial sensors. It presents approaches that incorporate the gravity vector to improve the description and matching of feature points, the detection and tracking of planar templates, and the visual quality of rendered virtual 3D objects. In handheld AR, both the camera and the display are located in the user's hand and can therefore be freely moved. The pose of the camera is generally determined with respect to piecewise planar objects that have a known static orientation with respect to gravity. In the presence of (close to) vertical surfaces, we show how gravity-aligned feature descriptors (GAFD) improve the initialization of tracking algorithms relying on feature point descriptor-based approaches in terms of quality and performance. For (close to) horizontal surfaces, we propose to use the gravity vector to rectify the camera image and to detect and describe features in the rectified image. The resulting gravity-rectified feature descriptors (GREFD) provide an improved precision-recall characteristic and enable faster initialization, in particular under steep viewing angles. Gravity-rectified camera images also allow for real-time 6 DoF pose estimation using an edge-based object detection algorithm that handles only 4 DoF similarity transforms. Finally, the rendering of virtual 3D objects can be made more realistic and plausible by taking into account the orientation of the gravitational force in addition to the relative pose between the handheld device and a real object.
Citations: 58
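
The gravity-rectification step can be sketched as follows: given the gravity direction in camera coordinates from the inertial sensors, build the rotation that would look straight down at a horizontal surface and warp the image with the induced pure-rotation homography H = K R K^-1. The intrinsics and sign conventions below are assumptions.

```python
# Hypothetical sketch: warp the image as if the camera looked straight
# down at a horizontal plane, using the measured gravity direction.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed intrinsics

def rectify_to_gravity(image, gravity_cam):
    """gravity_cam: gravity vector in camera coordinates (from the IMU)."""
    g = gravity_cam / np.linalg.norm(gravity_cam)
    z = np.array([0.0, 0.0, 1.0])            # camera viewing axis
    axis = np.cross(z, g)                     # rotation taking z onto g
    angle = np.arccos(np.clip(np.dot(z, g), -1.0, 1.0))
    if np.linalg.norm(axis) < 1e-8:
        R = np.eye(3)                         # already aligned
    else:
        R, _ = cv2.Rodrigues(axis / np.linalg.norm(axis) * angle)
    # Homography induced by a pure rotation; the sign of R depends on the
    # chosen camera-to-world convention.
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(image, H, (image.shape[1], image.shape[0]))
```
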
Texture-less object tracking with online training using an RGB-D camera
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092377
Youngmin Park, V. Lepetit, Woontack Woo
We propose a texture-less object detection and 3D tracking method which automatically extracts, on the fly, the information it needs from color images and the corresponding depth maps. While texture-less 3D tracking is not new, it requires a prior CAD model, and real-time detection methods still have to be developed for robust tracking. To detect the target, we propose to rely on a fast template-based method, which provides an initial estimate of its 3D pose, and we refine this estimate using depth and image contour information. We automatically extract a 3D model for the target from the depth information. To this end, we developed methods to enhance the depth map and to stabilize the 3D pose estimation. We demonstrate our method on challenging sequences exhibiting partial occlusions and fast motions.
Citations: 50
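
Two ingredients the method relies on, depth-map enhancement and back-projection of the depth map into a 3D point set for model extraction, might look like the sketch below. The filter choices and Kinect-style intrinsics are assumptions rather than the authors' exact processing.

```python
# Hypothetical sketch: depth enhancement and back-projection to 3D.
import cv2
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # assumed Kinect-style intrinsics

def enhance_depth(depth_mm):
    """Edge-preserving smoothing of a depth image (invalid pixels = 0)."""
    d = depth_mm.astype(np.float32)
    d = cv2.medianBlur(d, 5)                      # suppress speckle noise
    return cv2.bilateralFilter(d, 5, 50.0, 5.0)   # keep depth discontinuities

def backproject(depth_mm):
    """Turn a depth map into an N x 3 point set in camera coordinates."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0                         # millimetres -> metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                     # drop invalid pixels
```
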
Outdoor mobile localization from panoramic imagery
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092399
Jonathan Ventura, Tobias Höllerer
We describe an end-to-end system for mobile, vision-based localization and tracking in urban environments. Our system uses panoramic imagery which is processed and indexed to provide localization coverage over a large area using few capture points. We utilize a client-server model which allows for remote computation and data storage while maintaining real-time tracking performance. Previous search results are cached and re-used by the mobile client to minimize communication overhead. We evaluate the use of the system for flexible real-time camera tracking in large outdoor spaces.
Citations: 3
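
The client-side caching idea can be sketched as a small cell-indexed cache that reuses the server's previous localization answer instead of re-querying; the cell size and the query interface are hypothetical.

```python
# Hypothetical sketch: cell-indexed cache of server localization results.
import math

CELL_METRES = 25.0  # assumed cache granularity

class LocalizationCache:
    def __init__(self, query_server):
        self._query = query_server  # hypothetical (x, y) -> result function
        self._cache = {}

    def _cell(self, x, y):
        return (math.floor(x / CELL_METRES), math.floor(y / CELL_METRES))

    def localize(self, x, y):
        key = self._cell(x, y)
        if key not in self._cache:               # hit the network only once
            self._cache[key] = self._query(x, y)  # per cell
        return self._cache[key]
```
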
Evaluating the impact of recovery density on augmented reality tracking
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092374
Christopher Coffin, Cha Lee, Tobias Höllerer
Natural feature tracking systems for augmented reality are highly accurate, but can lose tracking. When registration is lost, the system must be able to re-localize and recover tracking. Likewise, when a camera is new to a scene, it must be able to perform the related task of localization. Localization and re-localization can only be performed at certain points, or when viewing particular objects or parts of the scene with a sufficient number and quality of recognizable features to allow for tracking recovery. We explore how the density of such recovery locations/poses influences the time it takes users to resume tracking. We focus our evaluation on two generalized techniques for localization: keyframe-based and model-based. For the keyframe-based approach we assume a constant collection rate for keyframes. We find that at practical collection rates, the task of localizing to a previously acquired keyframe that is shown to the user does not become more time-consuming as the interval between keyframes increases. For a localization approach using model data, we consider a grid of points around the model at which localization is guaranteed to succeed. We find that the user interface is crucial to successful localization. Localization can occur quickly if users do not need to orient themselves to marked localization points. When users are forced to mentally register themselves with a map of the scene, localization quickly becomes impractical as the distance to the next localization point increases. We contend that our results will help future designers of localization techniques to better plan for the effects of their proposed solutions.
Citations: 5
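
Keyframe-based recovery of the kind evaluated here is often implemented by matching a small blurred thumbnail of the current frame against stored keyframe thumbnails; the sketch below assumes that style, with thumbnail size and similarity threshold as placeholders.

```python
# Hypothetical sketch: relocalize against small blurred keyframe thumbnails.
import cv2
import numpy as np

THUMB = (40, 30)  # assumed thumbnail size

def thumbnail(frame):
    small = cv2.resize(frame, THUMB, interpolation=cv2.INTER_AREA)
    return cv2.GaussianBlur(small, (5, 5), 1.5).astype(np.float32)

class Relocalizer:
    def __init__(self):
        self.keyframes = []  # list of (thumbnail, pose) pairs

    def add_keyframe(self, frame, pose):
        self.keyframes.append((thumbnail(frame), pose))

    def recover(self, frame, max_ssd=2.0e6):
        """Return the pose of the most similar keyframe, or None if lost."""
        t = thumbnail(frame)
        ssd, pose = min(((np.sum((t - kt) ** 2), p)
                         for kt, p in self.keyframes), key=lambda s: s[0])
        return pose if ssd < max_ssd else None
```
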
Transformative reality: Augmented reality for visual prostheses
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092402
W. Lui, D. Browne, L. Kleeman, T. Drummond, Wai Ho Li
Visual prostheses such as retinal implants provide bionic vision that is limited in spatial and intensity resolution. This limitation is a fundamental challenge of bionic vision as it severely truncates salient visual information. We propose to address this challenge by performing real-time transformations of visual and non-visual sensor data into symbolic representations that are then rendered as low-resolution vision; a concept we call Transformative Reality. For example, a depth camera allows the detection of empty ground in cluttered environments, which is then visually rendered as bionic vision to enable indoor navigation. Such symbolic representations are similar to virtual content overlays used in Augmented Reality but are registered to the 3D world via the user's sense of touch. Preliminary user trials, where a head-mounted display artificially constrains vision to a 25×25 grid of binary dots, suggest that Transformative Reality provides practical and significant improvements over traditional bionic vision in tasks such as indoor navigation, object localisation and people detection.
Citations: 9
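
Rendering a detected-ground mask at the prosthesis resolution used in the trials reduces to pooling the mask into a 25×25 binary grid; a minimal sketch, with the lit-cell threshold as an assumption (ground detection itself would come from the depth camera).

```python
# Hypothetical sketch: pool a binary ground mask into a 25 x 25 dot grid.
import cv2
import numpy as np

def to_phosphene_grid(ground_mask):
    """ground_mask: full-resolution binary mask of detected empty ground."""
    cells = cv2.resize(ground_mask.astype(np.float32), (25, 25),
                       interpolation=cv2.INTER_AREA)  # average pooling
    return cells > 0.5  # a dot lights up if its cell is mostly ground
```
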
Urban canvas: Unfreezing street-view imagery with semantically compressed LIDAR pointclouds
Pub Date : 2011-10-01 DOI: 10.1109/ISMAR.2011.6143897
Thommen Korah, Yun-Ta Tsai
Detailed 3D scans of urban environments are increasingly being collected with the goal of bringing more location-aware content to mobile users. This work converts large collections of LIDAR scans and street-view panoramas into a representation that extracts semantically meaningful components of the scene. Compressing this data by an order of magnitude or more enables rich user interactions with mobile applications that have a very good knowledge of the scene around them. These representations are suitable for integrating into physics engines and transmission over mobile networks — key components of modern AR entertainment solutions.
Citations: 4
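
One way to realize the semantic-compression idea is to replace the many LIDAR points lying on a facade or road with a fitted plane model plus its inlier extent. A plain RANSAC plane fit, sketched below, stands in for the paper's pipeline; iteration count and tolerance are assumptions.

```python
# Hypothetical sketch: RANSAC plane fit as a stand-in for semantic
# compression of facade/road points.
import numpy as np

def ransac_plane(points, iters=200, tol=0.05):
    """points: N x 3 array. Returns ((n, d) with n.p + d = 0, inlier mask)."""
    rng = np.random.default_rng(0)
    best_model, best_inliers = None, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                    # degenerate (collinear) sample
        n = n / norm
        d = -np.dot(n, p[0])
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```
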
An empiric evaluation of confirmation methods for optical see-through head-mounted display calibration
Pub Date : 2011-10-01 DOI: 10.2312/EGVE/JVRC12/073-080
P. Maier, Arindam Dey, C. Waechter, C. Sandor, M. Tönnis, G. Klinker
The calibration of optical see-through head-mounted displays (OSTHMDs) is an important foundation for correct object alignment in augmented reality. Any calibration process for OSTHMDs requires users to align 2D points in screen space with 3D points in the real world and to confirm each alignment. In this poster, we present the results of our empiric evaluation, in which we compared four confirmation methods: Keyboard, Hand-held, Voice, and Waiting. The Waiting method, designed to reduce head motion during confirmation, showed significantly higher accuracy than all other methods. In addition, averaging the sampled user input over a time window before the moment of confirmation improved the accuracy of all methods. A further expert study showed that results achieved with a video see-through head-mounted display are also valid for optical see-through head-mounted display calibration.
Citations: 20
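
The averaging result suggests buffering tracker samples and averaging them at the moment of confirmation, instead of taking the single sample at the keypress; a minimal sketch, with the window length assumed.

```python
# Hypothetical sketch: average tracker samples over a short window at the
# moment of confirmation to damp confirmation-induced head motion.
from collections import deque
import numpy as np

WINDOW = 15  # assumed sample count (about 0.25 s at 60 Hz)

class AlignmentSampler:
    def __init__(self):
        self.buffer = deque(maxlen=WINDOW)
        self.pairs = []  # (2D screen point, averaged 3D point) alignments

    def on_tracker_sample(self, world_point):
        self.buffer.append(np.asarray(world_point, dtype=float))

    def on_confirm(self, screen_point):
        """Store one alignment using the averaged recent samples."""
        world_avg = np.mean(np.stack(self.buffer), axis=0)
        self.pairs.append((np.asarray(screen_point, dtype=float), world_avg))
```
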
Augmenting magnetic field lines for school experiments
Pub Date : 2011-10-01 DOI: 10.1109/ISMAR.2011.6143893
F. Mannuß, J. Rubel, Clemens Wagner, F. Bingel, André Hinkenjann
We present a system for interactive magnetic field simulation in an AR setup. The aim of this work is to investigate how AR technology can help develop a better understanding of the concept of fields and field lines, and of their relationship to the magnetic forces in typical school experiments. Haptic feedback is provided by real magnets that are optically tracked. In a stereo video see-through head-mounted display, the magnets are augmented with the dynamically computed field lines.
Citations: 23
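
Field lines for display can be traced by stepping along the local field direction; the sketch below uses an ideal point-dipole model and fixed-length Euler steps as stand-ins for the paper's simulation.

```python
# Hypothetical sketch: trace a field line of an ideal point dipole by
# stepping along the local field direction.
import numpy as np

def dipole_field(r, m=np.array([0.0, 0.0, 1.0])):
    """Unnormalized B field of a point dipole with moment m at offset r."""
    rn = np.linalg.norm(r)
    return 3.0 * r * np.dot(m, r) / rn**5 - m / rn**3

def field_line(start, step=0.01, n_steps=2000):
    pts = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        b = dipole_field(pts[-1])
        pts.append(pts[-1] + step * b / np.linalg.norm(b))  # unit-speed step
    return np.array(pts)  # polyline suitable for rendering as a field line
```
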