
2011 10th IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

The Argon AR Web Browser and standards-based AR application environment
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092371
B. MacIntyre, A. Hill, Hafez Rouzati, Maribeth Gandy Coleman, Brian Davidson
A common vision of Augmented Reality (AR) is that of a person immersed in a diverse collection of virtual information, superimposed on their view of the world around them. If such a vision is to become reality, an ecosystem for AR must be created that satisfies at least these properties: multiple sources (or channels of interactive information) must be able to be simultaneously displayed and interacted with, channels must be isolated from each other (for security and stability), channel authors must have the flexibility to design the content and interactivity of their channel, and the application must fluidly integrate with the ever-growing cloud of systems and services that define our digital lives. In this paper, we present the design and implementation of the Argon AR Web Browser and describe our vision of an AR application environment that leverages the WWW ecosystem. We also describe KARML, our extension to KML (the spatial markup language for Google Earth and Maps), that supports the functionality required for mobile AR. We combine KARML with the full range of standard web technologies to create a standards-based web browser for mobile AR. KARML lets users develop 2D and 3D content using existing web technologies and facilitates easy deployment from standard web servers. We highlight a number of projects that have used Argon and point out the ways in which our web-based architecture has made previously impractical AR concepts possible.
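To make the channel idea concrete, here is a minimal sketch of what a KARML-style channel document might look like, built with Python's standard ElementTree API. The `karml:balloonContent` element, its namespace URI, and the coordinates are placeholders invented for this sketch; the actual KARML schema defined by the Argon authors may use different names.

```python
# A hypothetical KARML-style channel document: a geo-registered placemark
# whose content is ordinary HTML, in the spirit of the KML extension the
# paper describes. Element names under the "karml" prefix are invented
# placeholders, not the real KARML schema.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
KARML_NS = "http://example.org/karml"  # placeholder namespace URI

ET.register_namespace("", KML_NS)
ET.register_namespace("karml", KARML_NS)

kml = ET.Element(f"{{{KML_NS}}}kml")
doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
ET.SubElement(pm, f"{{{KML_NS}}}name").text = "Campus landmark"

# Geo-registered anchor point for the AR content (lon, lat, altitude).
point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "-84.396,33.776,0"

# Hypothetical extension element: HTML rendered as a billboard in the AR view.
html = ET.SubElement(pm, f"{{{KARML_NS}}}balloonContent")
html.text = "<h1>Tech Tower</h1><p>Built in 1888.</p>"

print(ET.tostring(kml, encoding="unicode"))
```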
Citations: 83
KinectFusion: Real-time dense surface mapping and tracking
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092378
Richard A. Newcombe, S. Izadi, Otmar Hilliges, D. Molyneaux, David Kim, A. Davison, Pushmeet Kohli, J. Shotton, Steve Hodges, A. Fitzgibbon
We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
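The core fusion step is compact enough to sketch. The following Python/NumPy fragment folds one frame's truncated signed-distance measurements into a global TSDF volume using the weighted running average the abstract alludes to; voxel-to-depth-map projection is assumed to have been done already, and the truncation distance is an illustrative value.

```python
import numpy as np

TRUNC = 0.03  # truncation distance in metres (illustrative value)

def fuse_frame(tsdf, weight, sdf_meas, w_meas=1.0, w_max=64.0):
    """Fold one frame's signed-distance measurements into the global volume.

    tsdf, weight : (X, Y, Z) float arrays holding the current global model
    sdf_meas     : per-voxel signed distance to the surface observed in the
                   current depth frame (NaN where the voxel is not visible)
    """
    valid = ~np.isnan(sdf_meas)
    d = np.clip(sdf_meas[valid], -TRUNC, TRUNC) / TRUNC  # truncate, normalise
    # Weighted running average: the stored value is pulled toward the new
    # measurement in proportion to its weight.
    tsdf[valid] = (weight[valid] * tsdf[valid] + w_meas * d) / (weight[valid] + w_meas)
    # Cap the accumulated weight so the model can still adapt over time.
    weight[valid] = np.minimum(weight[valid] + w_meas, w_max)
    return tsdf, weight
```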
Citations: 3961
3D high dynamic range display system
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092393
Saeko Shimazu, D. Iwai, Kosuke Sato
This paper introduces a new high dynamic range (HDR) display system that generates a physical 3D HDR image without using stereoscopic methods. To boost contrast beyond that obtained using either a hardcopy or a projector, we employ a multiprojection system to superimpose images onto a textured solid hardcopy that is output by a 3D printer or a rapid prototyping machine. We introduce two basic techniques for our 3D HDR display. The first technique computes an optimal placement of projectors so that projected images cover the hardcopy's entire surface while maximizing image quality. The second technique allows a user to place the projectors near the computed optimal position by projecting from each projector images that act as visual guides. Through proof-of-concept experiments, we were able to modulate luminance and chrominance with a registration error of less than 3 mm. The physical contrast ratio obtained using our method was approximately 5,000:1, while it was 5:1 in the case of viewing the 3D printout under environmental light and 1,000:1 in the case of using the projectors to project the image on regular screens.
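The reported numbers are consistent with the rule of thumb that superimposing a projector onto a reflective print multiplies the two contrast ratios; a quick check, using the figures from the abstract:

```python
# Contrast figures taken from the abstract above.
print_contrast = 5         # 3D printout under environmental light (5:1)
projector_contrast = 1000  # projector image on a regular screen (1000:1)

# Projected light modulates the print's reflectance, so contrasts multiply.
combined = print_contrast * projector_contrast
print(f"combined contrast ~ {combined}:1")  # ~5000:1, matching the paper
```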
Citations: 22
Graph-cut-based 3D model segmentation for articulated object reconstruction
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092397
Inkyu Han, Hyoungnyoun Kim, Ji-Hyung Park
The three-dimensional (3D) reconstruction of objects has been well studied in the augmented reality (AR) literature [1, 2]. Most existing studies assume that the target object to be reconstructed is rigid, whereas objects in the real world can be dynamic or deformable. AR systems therefore need to handle non-rigid objects in order to adapt to environmental changes. In this paper, we address the reconstruction of articulated objects as a starting point for modeling deformable objects. An articulated object is composed of partially rigid components linked by joints. After building a mesh model of the object, we segment the model into its components along their boundaries using a graph-cut-based approach that we propose.
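As a rough illustration of the graph-cut formulation (not the authors' exact energy terms, which the abstract does not give), the sketch below treats mesh faces as graph nodes, assigns low edge capacities across sharp dihedral creases where joint boundaries are likely, and separates two rigid components with an s-t minimum cut via networkx.

```python
# Graph-cut segmentation of a mesh into two rigid parts: faces are nodes,
# adjacent faces share an edge, and capacities drop across sharp creases so
# the minimum cut prefers to run along likely joint boundaries. The capacity
# function below is an illustrative choice.
import math
import networkx as nx

def segment_two_parts(face_adjacency, dihedral, seed_a, seed_b):
    """face_adjacency: iterable of (face_i, face_j) index pairs
    dihedral: dict mapping (face_i, face_j) -> dihedral angle in radians
    seed_a, seed_b: one face index known to lie in each component"""
    g = nx.Graph()
    for (i, j) in face_adjacency:
        # Sharper creases (larger angles) get exponentially lower capacity.
        g.add_edge(i, j, capacity=math.exp(-dihedral[(i, j)]))
    _, (part_a, part_b) = nx.minimum_cut(g, seed_a, seed_b)
    return part_a, part_b
```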
Citations: 0
Robust planar target tracking and pose estimation from a single concavity
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092365
M. Donoser, P. Kontschieder, H. Bischof
In this paper we introduce a novel real-time method to track weakly textured planar objects and to simultaneously estimate their 3D pose. The basic idea is to adapt the classic tracking-by-detection approach, which detects the object to be tracked independently in each frame, for tracking non-textured objects. In order to robustly estimate the 3D pose of such objects in each frame, we have to tackle three demanding problems. First, we need to find a stable representation of the object which is discriminable against the background and highly repeatable. Second, we have to robustly relocate this representation in every frame, also during considerable viewpoint changes. Finally, we have to estimate the pose from a single, closed object contour. All of these demands must be met at low computational cost and in real time. To attack the above-mentioned problems, we propose to exploit the properties of Maximally Stable Extremal Regions (MSERs) for detecting the required contours in an efficient manner and to apply random ferns as an efficient and robust classifier for tracking. To estimate the 3D pose, we construct a perspectively invariant frame on the closed contour which is intrinsically provided by the extracted MSER. In our experiments we obtain robust tracking results with accurate poses on various challenging image sequences under a single requirement: the MSER used for tracking has to have at least one concavity that sufficiently deviates from its convex hull.
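The region-selection requirement stated at the end can be approximated in a few lines of OpenCV. The sketch below uses region solidity (pixel area divided by convex hull area) as a stand-in for the paper's deviation-from-hull test; the threshold value is an illustrative assumption.

```python
# Keep only MSERs with a pronounced concavity, approximated here through
# solidity: a convex region has solidity near 1, a concave one well below.
# The 0.85 threshold is an assumption for illustration.
import cv2

def concave_msers(gray, max_solidity=0.85):
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)  # each region is an (N, 2) pixel array
    keep = []
    for pts in regions:
        hull = cv2.convexHull(pts.reshape(-1, 1, 2))
        hull_area = cv2.contourArea(hull)
        if hull_area < 1.0:
            continue
        solidity = len(pts) / hull_area    # pixel count vs. hull area
        if solidity <= max_solidity:       # deviates enough from its hull
            keep.append(pts)
    return keep
```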
Citations: 19
User experiences with augmented reality aided navigation on phones
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092390
Alessandro Mulloni, H. Seichter, D. Schmalstieg
We investigate user experiences when using augmented reality (AR) as a new aid to navigation. We integrate AR with other more common interfaces into a handheld navigation system, and we conduct an exploratory study to see where and how people exploit AR. Based on previous work on augmented photographs, we hypothesize that AR is used more to support wayfinding at static locations when users approach a road intersection. In partial contrast to this hypothesis, our results from a user evaluation hint that users will expect to use the system while walking. Further, our results also show that AR is usually exploited shortly before and after road intersections, suggesting that tracking support will be mostly needed in proximity of road intersections.
Citations: 48
Adaptive substrate for enhanced spatial augmented reality contrast and resolution
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092401
Markus Broecker, Ross T. Smith, B. Thomas
This poster presents the concept of combining two display technologies to enhance graphics effects in spatial augmented reality (SAR) environments. This is achieved by using an ePaper surface as an adaptive substrate instead of a white painted surface, allowing the development of novel image techniques that improve image quality and object appearance in projector-based SAR environments.
Citations: 1
Indoor positioning and navigation for mobile AR
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR-AMH.2011.6093646
C. Perey
The researchers and developers of mobile AR platforms need a common platform for developing experiences regardless of the user's surroundings. In order to expand the use of AR both indoors and outdoors, with and without computer vision techniques, the breadth of options available for positioning users and points of interest needs to expand. Separately, the experts in indoor positioning and navigation are generally not as familiar with AR use scenarios as they are with other domains. Together, positioning and navigation experts and mobile AR experts will discuss: What are the indoor positioning and navigation systems best suited for mobile AR? What studies are underway or need to be conducted in order to advance this field?
Citations: 1
Augmenting 3D urban environment using mobile devices
Pub Date : 2011-10-26 DOI: 10.1109/ISMAR.2011.6092396
Yi Wu, M. E. Choubassi, I. Kozintsev
We describe an augmented reality prototype for exploring a 3D urban environment on mobile devices. Our system utilizes the location and orientation sensors on the mobile platform as well as computer vision techniques to register the live view of the device with the 3D urban data. In particular, the system recognizes the buildings in the live video, tracks the camera pose, and augments the video with relevant information about the buildings in the correct perspective. The 3D urban data consist of 3D point clouds and corresponding geo-tagged RGB images of the urban environment. We also discuss the processing steps to make such 3D data scalable and usable by our system.
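As a sketch of how such a sensor prior might shortlist candidate buildings before any visual recognition runs, the fragment below filters geo-tagged landmarks by distance from the GPS fix and by angular offset from the compass heading. The thresholds and the landmark tuple format are assumptions for illustration, not the prototype's actual interface.

```python
# Shortlist geo-tagged landmarks using only the phone's GPS fix and compass
# heading, before any image-based recognition. Thresholds are assumptions.
import math

EARTH_R = 6371000.0  # metres

def shortlist(landmarks, lat, lon, heading_deg, max_dist_m=200.0, fov_deg=90.0):
    """landmarks: iterable of (name, lat, lon) tuples (hypothetical format)."""
    out = []
    for name, llat, llon in landmarks:
        # Equirectangular approximation; adequate at city-block scales.
        dx = math.radians(llon - lon) * math.cos(math.radians(lat)) * EARTH_R
        dy = math.radians(llat - lat) * EARTH_R
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # clockwise from north
        off = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if dist <= max_dist_m and off <= fov_deg / 2.0:
            out.append((name, dist))
    return sorted(out, key=lambda t: t[1])
```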
Citations: 8
Tracking-by-synthesis using point features and pyramidal blurring
Pub Date : 2011-10-26 DOI: 10.1109/ismar.2011.6092373
Gilles Simon
Tracking-by-synthesis is a promising method for markerless vision-based camera tracking, particularly suitable for Augmented Reality applications. In particular, it is drift-free, viewpoint invariant and easy to combine with physical sensors such as GPS and inertial sensors. While edge features have been used successfully within the tracking-by-synthesis framework, point features have, to our knowledge, still never been used. We believe that this is due to the fact that real-time corner detectors are generally weakly repeatable between a camera image and a rendered texture. In this paper, we compare the repeatability of commonly used FAST, Harris and SURF interest point detectors across view synthesis. We show that adding depth blur to the rendered texture can drastically improve the repeatability of FAST and Harris corner detectors (up to 100% in our experiments), which can be very helpful, e.g., for making tracking-by-synthesis run on mobile phones. We propose a method for simulating depth blur on the rendered images using a pre-calibrated depth response curve. In order to fulfil the performance requirements, a pyramidal approach is used based on the well-known MIP mapping technique. We also propose an original method for calibrating the depth response curve, which is suitable for any kind of focus lenses and comes for free in terms of programming effort once the tracking-by-synthesis algorithm has been implemented.
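The pyramidal blurring idea can be sketched as follows: build a blur pyramid of the rendered texture, map each pixel's depth to a blur diameter through the pre-calibrated depth response curve, and blend between the two nearest pyramid levels, analogous to MIP-map level selection. The calibration samples below are invented for illustration (not the paper's measured curve), and the sketch assumes a single-channel float image.

```python
# Pyramidal depth blur for a rendered texture, in the spirit of the MIP-map
# approach described above. Calibration samples are invented placeholders.
import cv2
import numpy as np

# Pre-calibrated depth (m) -> blur diameter (px); sharpest near the focus depth.
CAL_DEPTH = np.array([0.3, 0.5, 1.0, 2.0, 4.0])
CAL_BLUR = np.array([8.0, 3.0, 1.0, 2.5, 6.0])

def depth_blur(rendered, depth, n_levels=5):
    """rendered: float32 (H, W) image; depth: (H, W) depth map in metres."""
    levels = [rendered]
    for _ in range(n_levels - 1):
        levels.append(cv2.pyrDown(levels[-1]))  # each level roughly doubles blur
    h, w = rendered.shape
    # Bring every level back to full resolution so we can blend per pixel.
    full = np.stack([cv2.resize(l, (w, h)) for l in levels])  # (L, H, W)

    diam = np.interp(depth, CAL_DEPTH, CAL_BLUR)        # blur diameter per pixel
    lvl = np.clip(np.log2(np.maximum(diam, 1.0)), 0.0, n_levels - 1.0)
    lo = np.floor(lvl).astype(np.int64)
    hi = np.minimum(lo + 1, n_levels - 1)
    a = lvl - lo                                        # blend factor
    img_lo = np.take_along_axis(full, lo[None], axis=0)[0]
    img_hi = np.take_along_axis(full, hi[None], axis=0)[0]
    return (1.0 - a) * img_lo + a * img_hi
```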
Citations: 26