
2011 10th IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

The Argon AR Web Browser and standards-based AR application environment
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092371
B. MacIntyre, A. Hill, Hafez Rouzati, Maribeth Gandy Coleman, Brian Davidson
A common vision of Augmented Reality (AR) is that of a person immersed in a diverse collection of virtual information, superimposed on their view of the world around them. If such a vision is to become reality, an ecosystem for AR must be created that satisfies at least these properties: multiple sources (or channels of interactive information) must be able to be simultaneously displayed and interacted with, channels must be isolated from each other (for security and stability), channel authors must have the flexibility to design the content and interactivity of their channel, and the application must fluidly integrate with the ever-growing cloud of systems and services that define our digital lives. In this paper, we present the design and implementation of the Argon AR Web Browser and describe our vision of an AR application environment that leverages the WWW ecosystem. We also describe KARML, our extension to KML (the spatial markup language for Google Earth and Maps), that supports the functionality required for mobile AR. We combine KARML with the full range of standard web technologies to create a standards-based web browser for mobile AR. KARML lets users develop 2D and 3D content using existing web technologies and facilitates easy deployment from standard web servers. We highlight a number of projects that have used Argon and point out the ways in which our web-based architecture has made previously impractical AR concepts possible.
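Since KARML extends KML, the flavor of authoring a geo-located channel can be shown with plain KML 2.2, which a standard web server can serve as-is. The sketch below is illustrative only: the abstract does not spell out KARML's element names, so no AR-specific tags are invented here, and the helper name and sample coordinates are assumptions.

```python
# A minimal sketch: generate a standard KML 2.2 Placemark of the kind a
# KARML-style channel could serve from an ordinary web server. KARML's own
# AR-specific elements are not shown because the abstract does not name
# them; make_placemark and the sample coordinates are illustrative.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

def make_placemark(name, lon, lat, alt=0.0):
    """Return a KML document string with one geo-referenced Placemark."""
    kml = ET.Element(f"{{{KML_NS}}}kml")
    pm = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    coords = ET.SubElement(point, f"{{{KML_NS}}}coordinates")
    coords.text = f"{lon},{lat},{alt}"  # KML coordinate order: lon,lat,alt
    return ET.tostring(kml, encoding="unicode")

print(make_placemark("Sample AR anchor", -84.3963, 33.7756))
```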
Citations: 83
KinectFusion: Real-time dense surface mapping and tracking
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092378
Richard A. Newcombe, S. Izadi, Otmar Hilliges, D. Molyneaux, David Kim, A. Davison, Pushmeet Kohli, J. Shotton, Steve Hodges, A. Fitzgibbon
We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room-sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR); in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
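The single global implicit surface model mentioned above is a truncated signed distance function (TSDF) volume, updated with a per-voxel weighted running average. Below is a minimal CPU sketch of that fusion step, assuming pinhole intrinsics K = (fx, fy, cx, cy) and a known world-to-camera pose; the paper's implementation runs on the GPU and is paired with the coarse-to-fine ICP tracker, which is omitted here.

```python
# A minimal sketch of TSDF fusion (CPU, numpy): project every voxel centre
# into the current depth frame and fold its truncated signed distance into
# a weighted running average. tsdf and weight are float arrays (nx, ny, nz).
import numpy as np

def fuse_depth(tsdf, weight, depth, pose_w2c, K, voxel_origin, voxel_size, trunc=0.03):
    """Integrate one depth frame (metres) into the TSDF volume in place."""
    fx, fy, cx, cy = K
    nx, ny, nz = tsdf.shape
    # World coordinates of every voxel centre.
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts_w = voxel_origin + voxel_size * np.stack([ii, jj, kk], axis=-1)
    # Transform into the camera frame and project with the pinhole model.
    R, t = pose_w2c[:3, :3], pose_w2c[:3, 3]
    pts_c = pts_w @ R.T + t
    z = pts_c[..., 2]
    z_safe = np.where(z > 1e-6, z, 1.0)        # avoid divide-by-zero
    u = np.round(pts_c[..., 0] / z_safe * fx + cx).astype(int)
    v = np.round(pts_c[..., 1] / z_safe * fy + cy).astype(int)
    h, w = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    valid &= d > 0
    sdf = d - z                                # signed distance along the ray
    valid &= sdf > -trunc                      # skip voxels far behind the surface
    tsd = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average with per-frame weight 1.
    w_new = weight + valid
    tsdf[:] = np.where(valid, (tsdf * weight + tsd) / np.maximum(w_new, 1), tsdf)
    weight[:] = w_new
```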
Citations: 3961
3D high dynamic range display system
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092393
Saeko Shimazu, D. Iwai, Kosuke Sato
This paper introduces a new high dynamic range (HDR) display system that generates a physical 3D HDR image without using stereoscopic methods. To boost contrast beyond that obtained using either a hardcopy or a projector, we employ a multiprojection system to superimpose images onto a textured solid hardcopy that is output by a 3D printer or a rapid prototyping machine. We introduce two basic techniques for our 3D HDR display. The first technique computes an optimal placement of projectors so that projected images cover the hardcopy's entire surface while maximizing image quality. The second technique allows a user to place the projectors near the computed optimal position by projecting from each projector images that act as visual guides. Through proof-of-concept experiments, we were able to modulate luminance and chrominance with a registration error of less than 3 mm. The physical contrast ratio obtained using our method was approximately 5,000:1, while it was 5:1 in the case of viewing the 3D printout under environmental light and 1,000:1 in the case of using the projectors to project the image on regular screens.
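The reported figures fit a multiplicative contrast model for superimposed modulation: when a projector modulates light over printed reflectance, the combined contrast is roughly the product of the two individual ratios. A quick check (the multiplicative model is our assumption; the three ratios come from the abstract):

```python
# Sanity check of the reported contrast ratios under an assumed
# multiplicative model for projector-on-print superimposition.
print_contrast = 5          # 3D printout viewed under environmental light
projector_contrast = 1000   # projector image on a regular screen
combined = print_contrast * projector_contrast
print(f"{combined}:1")      # 5000:1, matching the ~5,000:1 reported
```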
Citations: 22
Graph-cut-based 3D model segmentation for articulated object reconstruction
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092397
Inkyu Han, Hyoungnyoun Kim, Ji-Hyung Park
The three-dimensional (3D) reconstruction of objects has been well studied in the augmented reality (AR) literature [1, 2]. Most existing studies assume that the target object to be reconstructed is rigid, whereas objects in the real world can be dynamic or deformable. AR systems therefore need to handle non-rigid objects in order to adapt to environmental changes. In this paper, we address the problem of reconstructing articulated objects as a starting point for modeling deformable objects. An articulated object is composed of partially rigid components linked by joints. After building a mesh model of the object, the model is segmented into its components along their boundaries by the graph-cut-based approach that we propose.
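The segmentation step can be illustrated with a generic s-t min cut over the mesh's face-adjacency graph, as sketched below with networkx. The capacity function and seed faces are illustrative, not the authors' energy terms: edges across strongly concave creases (likely joints) are made cheap to cut, so the partition tends to split the model at component boundaries.

```python
# A generic graph-cut sketch for mesh segmentation (requires networkx).
import math
import networkx as nx

def segment_faces(adjacency, dihedral, seed_a, seed_b, sharpness=5.0):
    """adjacency: iterable of (face_i, face_j) pairs; dihedral: dict mapping
    each pair to a crease angle in radians (~0 flat, large at concave joints).
    Returns two sets of face ids, one per rigid component."""
    g = nx.Graph()
    for (i, j) in adjacency:
        # Concave creases get low capacity, i.e. they are cheap to cut.
        g.add_edge(i, j, capacity=math.exp(-sharpness * dihedral[(i, j)]))
    # Tie one seed face per component to a virtual terminal, uncuttable edges.
    g.add_edge("SRC", seed_a, capacity=float("inf"))
    g.add_edge("SNK", seed_b, capacity=float("inf"))
    _, (part_a, part_b) = nx.minimum_cut(g, "SRC", "SNK")
    return part_a - {"SRC"}, part_b - {"SNK"}

# Usage: parts = segment_faces([(0, 1), (1, 2)], {(0, 1): 0.1, (1, 2): 1.2}, 0, 2)
```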
Citations: 0
Robust planar target tracking and pose estimation from a single concavity
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092365
M. Donoser, P. Kontschieder, H. Bischof
In this paper we introduce a novel real-time method to track weakly textured planar objects and to simultaneously estimate their 3D pose. The basic idea is to adapt the classic tracking-by-detection approach, which searches for the tracked object independently in each frame, to non-textured objects. In order to robustly estimate the 3D pose of such objects in each frame, we have to tackle three demanding problems. First, we need to find a stable representation of the object which is discriminable against the background and highly repetitive. Second, we have to robustly relocate this representation in every frame, also during considerable viewpoint changes. Finally, we have to estimate the pose from a single, closed object contour. Of course, all demands shall be accommodated at low computational costs and in real-time. To attack the above-mentioned problems, we propose to exploit the properties of Maximally Stable Extremal Regions (MSERs) for detecting the required contours in an efficient manner and to apply random ferns as an efficient and robust classifier for tracking. To estimate the 3D pose, we construct a perspectively invariant frame on the closed contour which is intrinsically provided by the extracted MSER. In our experiments we obtain robust tracking results with accurate poses on various challenging image sequences under a single requirement: one MSER used for tracking has to have at least one concavity that sufficiently deviates from its convex hull.
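The single requirement stated at the end, a concavity that sufficiently deviates from the convex hull, is cheap to verify once MSERs are extracted. Below is a minimal OpenCV sketch using a fill-ratio test (region pixel count versus hull area); the threshold and the test itself are illustrative, not necessarily the paper's criterion.

```python
# A minimal sketch: detect MSERs and keep only regions whose area falls
# well short of their convex-hull area, i.e. regions with a usable concavity.
import cv2

def concave_msers(gray, max_fill_ratio=0.9):
    """gray: single-channel uint8 image. Returns the retained pixel sets."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    keep = []
    for pts in regions:                         # pts: Nx2 pixel coordinates
        hull = cv2.convexHull(pts.reshape(-1, 1, 2))
        hull_area = cv2.contourArea(hull)
        if hull_area > 0 and len(pts) / hull_area <= max_fill_ratio:
            keep.append(pts)                    # region deviates from its hull
    return keep

# Usage:
# gray = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
# candidates = concave_msers(gray)
```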
Citations: 19
User experiences with augmented reality aided navigation on phones
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092390
Alessandro Mulloni, H. Seichter, D. Schmalstieg
We investigate user experiences when using augmented reality (AR) as a new aid to navigation. We integrate AR with other more common interfaces into a handheld navigation system, and we conduct an exploratory study to see where and how people exploit AR. Based on previous work on augmented photographs, we hypothesize that AR is used more to support wayfinding at static locations when users approach a road intersection. In partial contrast to this hypothesis, our results from a user evaluation hint that users will expect to use the system while walking. Further, our results also show that AR is usually exploited shortly before and after road intersections, suggesting that tracking support will be mostly needed in proximity of road intersections.
Citations: 48
Adaptive substrate for enhanced spatial augmented reality contrast and resolution
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092401
Markus Broecker, Ross T. Smith, B. Thomas
This poster presents the concept of combining two display technologies to enhance graphics effects in spatial augmented reality (SAR) environments. This is achieved by using an ePaper surface as an adaptive substrate instead of a white painted surface, allowing the development of novel imaging techniques that improve image quality and object appearance in projector-based SAR environments.
Citations: 1
Encumbrance-free telepresence system with real-time 3D capture and display using commodity depth cameras
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092379
Andrew Maimone, H. Fuchs
This paper introduces a proof-of-concept telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, which is based on an array of commodity Microsoft Kinect™ color-plus-depth cameras. Novel contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and preserving stereo quality even with low rendering rates. Also presented is a solution to the problem of interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 100 million triangles/sec on a single PC and graphics board. Also presented is a Kinect-based marker-less tracking system that combines 2D eye recognition with depth information to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Our system is affordable and reproducible, offering the opportunity to easily deliver 3D telepresence beyond the researcher's lab.
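The data-merger idea, bringing several Kinects' depth maps into one world frame, reduces to back-projecting each depth image through its intrinsics and applying that camera's extrinsic pose. A minimal numpy sketch follows, assuming pinhole intrinsics and known camera-to-world poses; the paper's GPU pipeline additionally performs hole filling, smoothing, surface generation, and color correction.

```python
# A minimal sketch: back-project depth maps from a calibrated camera array
# and merge the resulting point clouds in a common world frame.
import numpy as np

def depth_to_world(depth, K, cam_to_world):
    """depth: HxW array in metres; K = (fx, fy, cx, cy); cam_to_world: 4x4 pose."""
    fx, fy, cx, cy = K
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    x = (u - cx) / fx * depth                   # pinhole back-projection
    y = (v - cy) / fy * depth
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                    # drop pixels with no depth
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    return pts @ R.T + t                        # express in the world frame

def merge_clouds(frames):
    """frames: iterable of (depth, K, cam_to_world), one entry per Kinect."""
    return np.vstack([depth_to_world(d, K, T) for d, K, T in frames])
```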
Citations: 239
Adaptive camera-based color mapping for mixed-reality applications
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092382
Martin Knecht, C. Traxler, W. Purgathofer, M. Wimmer
We present a novel adaptive color mapping method for virtual objects in mixed-reality environments. In several mixed-reality applications, added virtual objects should be visually indistinguishable from real objects. Recent mixed-reality methods use global-illumination algorithms to approach this goal. However, simulating the light distribution is not enough for visually plausible images. Since the observing camera has its very own transfer function from real-world radiance values to RGB colors, virtual objects look artificial just because their rendered colors do not match with those of the camera. Our approach combines an on-line camera characterization method with a heuristic to map colors of virtual objects to colors as they would be seen by the observing camera. Previous tone-mapping functions were not designed for use in mixed-reality systems and thus did not take the camera-specific behavior into account. In contrast, our method takes the camera into account and thus can also handle changes of its parameters during runtime. The results show that virtual objects look visually more plausible than by just applying tone-mapping operators.
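The core idea, mapping the renderer's colors through the camera's transfer behavior, can be illustrated with a plain affine RGB fit from corresponding samples of rendered and camera-observed colors. The sketch below uses ordinary least squares; the paper's on-line camera characterization and its mapping heuristic are more involved.

```python
# A minimal sketch: fit an affine RGB transform from (rendered, observed)
# color pairs and apply it to virtual-object colors before display.
import numpy as np

def fit_color_map(rendered, observed):
    """rendered, observed: Nx3 arrays of corresponding RGB samples in [0, 1].
    Returns a 3x4 matrix M with observed ~= M @ [r, g, b, 1]."""
    A = np.hstack([rendered, np.ones((len(rendered), 1))])   # Nx4 design matrix
    M, *_ = np.linalg.lstsq(A, observed, rcond=None)         # 4x3 solution
    return M.T                                               # 3x4

def apply_color_map(M, rgb):
    """Map one RGB color through the fitted transform."""
    return np.clip(M @ np.append(rgb, 1.0), 0.0, 1.0)
```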
Citations: 16
Using egocentric vision to achieve robust inertial body tracking under magnetic disturbances
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092528
G. Bleser, Gustaf Hendeby, M. Miezal
In the context of a smart user assistance system for industrial manipulation tasks it is necessary to capture motions of the upper body and limbs of the worker in order to derive his or her interactions with the task space. While such capturing technology already exists, the novelty of the proposed work results from the strong requirements of the application context: The method should be flexible and use only on-body sensors, work accurately in industrial environments that suffer from severe magnetic disturbances, and enable consistent registration between the user body frame and the task space. Currently available systems cannot provide this. This paper suggests a novel egocentric solution for visual-inertial upper-body motion tracking based on recursive filtering and model-based sensor fusion. Visual detections of the wrists in the images of a chest-mounted camera are used as substitute for the commonly used magnetometer measurements. The on-body sensor network, the motion capturing system, and the required calibration procedure are described and successful operation is shown in a real industrial environment.
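The substitution of visual wrist detections for magnetometer readings can be illustrated in one dimension: gyro integration alone drifts, and an occasional absolute heading measurement (here standing in for a vision-based observation) pulls the estimate back. Below is a toy scalar Kalman filter; all noise parameters are illustrative, and the paper's recursive body-model filter is far richer.

```python
# A toy 1D sketch of drift correction: integrate gyro rates, then apply a
# scalar Kalman update whenever an absolute heading observation arrives.
# Angle wrap-around is ignored for brevity.

def fuse_heading(gyro_rates, visual_headings, dt=0.01, q=1e-4, r=4e-2):
    """gyro_rates: angular rate (rad/s) per step; visual_headings: absolute
    heading (rad) per step, or None when no detection is available."""
    theta, p = 0.0, 1.0              # heading estimate and its variance
    track = []
    for omega, z in zip(gyro_rates, visual_headings):
        theta += omega * dt          # predict: integrate the gyro
        p += q                       # process noise grows the uncertainty
        if z is not None:            # correct with the absolute observation
            k = p / (p + r)          # Kalman gain
            theta += k * (z - theta)
            p *= 1.0 - k
        track.append(theta)
    return track
```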
Citations: 40