
2011 10th IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

Accurate and robust planar tracking based on a model of image sampling and reconstruction process
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092364
Eisuke Ito, Takayuki Okatani, K. Deguchi
It is one of the central issues in augmented reality and computer vision to track a planar object moving relative to a camera in an accurate and robust manner. In previous studies, it was pointed out that several factors make tracking difficult, such as illumination change and motion blur, and effective solutions were proposed for them. In this paper, we point out that degradation in effective image resolution can also deteriorate tracking performance, which typically occurs when the plane being tracked has an oblique pose with respect to the viewing direction, or when it moves to a location distant from the camera. The deterioration tends to become significantly large for extreme configurations, e.g., when the planar object is nearly at a right angle to the viewing direction. Such configurations can frequently occur in AR applications targeted at ordinary users. To cope with this problem, we model the sampling and reconstruction process of images, and present a tracking algorithm that incorporates the model to correctly handle these configurations. We show through several experiments that the proposed method performs better than conventional methods.
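The degradation the abstract describes can be pictured with a small experiment: when the tracked plane is oblique or far away, the warp from the template to the image shrinks the template, so each image pixel integrates many template pixels, and a tracker that compares a naively resampled template with the observed patch sees a systematic mismatch. Below is a minimal, hedged illustration (not the authors' algorithm; the global blur heuristic and all names are illustrative) of pre-filtering the template before warping and computing an SSD residual:

```python
import cv2
import numpy as np

def warp_template_with_prefilter(template, H, out_size):
    """Warp a planar template by homography H, approximating the camera's
    sampling by low-pass filtering before resampling (illustrative only)."""
    h_out, w_out = out_size
    h_t, w_t = template.shape[:2]
    # Crude global estimate of how strongly the warp shrinks the template:
    # ratio of template area to the area of the warped quadrilateral.
    corners = np.float32([[0, 0], [w_t, 0], [w_t, h_t], [0, h_t]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    area_warped = cv2.contourArea(warped.astype(np.float32))
    shrink = np.sqrt((w_t * h_t) / max(area_warped, 1e-6))
    # Pre-filter: blur proportionally to the shrink factor (sampling-model proxy).
    sigma = max(0.0, 0.5 * (shrink - 1.0))
    filtered = cv2.GaussianBlur(template, (0, 0), sigma) if sigma > 0 else template
    return cv2.warpPerspective(filtered, H, (w_out, h_out), flags=cv2.INTER_LINEAR)

def ssd_residual(observed_patch, template, H):
    """Sum-of-squared-differences residual a tracker could minimize over H."""
    pred = warp_template_with_prefilter(template, H, observed_patch.shape[:2])
    diff = observed_patch.astype(np.float32) - pred.astype(np.float32)
    return float(np.sum(diff * diff))
```

Without the pre-filter, the residual stays large even at the true pose for oblique or distant planes, which is the failure mode the paper addresses with a proper sampling and reconstruction model.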
Citations: 17
Deformable random dot markers
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092394
Hideaki Uchiyama, É. Marchand
We extend planar fiducial markers using random dots [8] to nonrigidly deformable markers. Because the recognition and tracking of random dot markers are based on keypoint matching, we can estimate the deformation of the markers with nonrigid surface detection from keypoint correspondences. First, the initial pose of the markers is computed from a homography with RANSAC as a planar detection. Second, deformations are estimated from the minimization of a cost function for deformable surface fitting. We show augmentation results of 2D surface deformation recovery with several markers.
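The planar initialization step, computing the marker pose from a RANSAC-estimated homography over keypoint matches, is a standard operation; a minimal OpenCV sketch of that step is shown below (function and threshold names are illustrative and not taken from the paper):

```python
import cv2
import numpy as np

def detect_marker_plane(marker_keypoints, frame_keypoints, matches,
                        ransac_reproj_thresh=3.0):
    """Estimate the marker-to-frame homography from keypoint matches with RANSAC.
    Returns (H, inlier_mask) or (None, None) if too few matches are available."""
    if len(matches) < 4:
        return None, None
    src = np.float32([marker_keypoints[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([frame_keypoints[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_reproj_thresh)
    return H, mask
```

The inlier correspondences from this planar detection can then seed the deformable surface fit that the abstract describes as the second step.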
Citations: 21
Image-based clothes transfer
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092383
Stefan Hauswiesner, M. Straka, Gerhard Reitmayr
Virtual dressing rooms for the fashion industry and digital entertainment applications aim at creating an image or a video of a user in which he or she wears different garments than in the real world. Such images can be displayed, for example, in a magic mirror shopping application or in games and movies. Current solutions involve the error-prone task of body pose tracking. We suggest an approach that allows users who are captured by a set of cameras to be virtually dressed with previously recorded garments in 3D. By using image-based algorithms, we can bypass critical components of other systems, especially tracking based on skeleton models. We rather transfer the appearance of a garment from one user to another by image processing and image-based rendering. Using images of real garments allows for photo-realistic rendering quality with high performance.
Citations: 17
Virtual transparency: Introducing parallax view into video see-through AR
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092395
A. Hill, Jacob Schiefer, Jeff Wilson, Brian Davidson, Maribeth Gandy Coleman, B. MacIntyre
In this poster, we present the idea of “virtual transparency” for video see-through AR. In fully synthetic 3D graphics, head-tracked motion parallax has been shown to be a powerful depth cue for understanding the structure of the virtual world. To leverage head-tracked motion parallax in video see-through AR, the view of the virtual and physical world must change together in response to head motion. We present a system for accomplishing this, and discuss the benefits and limitations of our approach.
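Head-coupled motion parallax of this kind is commonly rendered with an off-axis (asymmetric) perspective projection computed from the tracked eye position relative to the display plane. The sketch below shows that standard construction, not the authors' system; all parameter names are illustrative:

```python
import numpy as np

def head_coupled_projection(head_pos, screen_w, screen_h, near=0.05, far=100.0):
    """Asymmetric view frustum for a tracked head position.
    head_pos: (x, y, z) of the eye in metres relative to the screen centre,
    with +z pointing from the screen towards the viewer."""
    x, y, z = head_pos
    left   = (-screen_w / 2.0 - x) * near / z
    right  = ( screen_w / 2.0 - x) * near / z
    bottom = (-screen_h / 2.0 - y) * near / z
    top    = ( screen_h / 2.0 - y) * near / z
    # Standard OpenGL-style frustum matrix built from the asymmetric bounds.
    proj = np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ], dtype=np.float64)
    # The scene is also translated by -head_pos so the tracked eye sits at the origin.
    view = np.eye(4)
    view[:3, 3] = [-x, -y, -z]
    return proj, view
```

As the head moves laterally, virtual content rendered behind the display shifts consistently with the video background, which is the parallax cue the poster leverages.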
Citations: 48
Out of reach? — A novel AR interface approach for motor rehabilitation
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092389
H. Regenbrecht, G. McGregor, Claudia Ott, S. Hoermann, Thomas W. Schubert, L. Hale, Julia Hoermann, Brian Dixon, E. Franz
Mixed reality rehabilitation systems and games are demonstrating potential as innovative adjunctive therapies for health professionals in their treatment of various hand and upper limb motor impairments. Unilateral motor deficits of the arm, for example, are commonly experienced post stroke. Our TheraMem system provides an augmented reality game environment that contributes to this increasingly rich area of research. We present a prototype system which “fools the brain” by visually amplifying users' hand movements — small actual hand movements lead to perceived larger movements. We validate the usability of our system in an empirical study with forty-five non-clinical participants. In addition, we present early qualitative evidence for the utility of our approach and system for stroke recovery and motor rehabilitation. Future uses of the system are considered by way of conclusion.
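Visually amplifying hand movements reduces, at its simplest, to scaling the measured displacement about a reference point before rendering the virtual hand. A minimal sketch under that assumption follows (the gain value and names are illustrative, not TheraMem's parameters):

```python
import numpy as np

def amplify_hand_position(measured_pos, reference_pos, gain=1.5):
    """Rendered hand position: the real displacement from a reference point
    (e.g. the resting position) scaled by an amplification gain."""
    measured_pos = np.asarray(measured_pos, dtype=float)
    reference_pos = np.asarray(reference_pos, dtype=float)
    return reference_pos + gain * (measured_pos - reference_pos)

# Example: a 2 cm real reach is displayed as a 3 cm virtual reach with gain 1.5.
print(amplify_hand_position([0.02, 0.0, 0.0], [0.0, 0.0, 0.0]))
```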
Citations: 46
Providing guidance for maintenance operations using automatic markerless Augmented Reality system
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092385
H. Álvarez, I. Aguinaga, D. Borro
This paper proposes a new real-time Augmented Reality based tool to help in disassembly for maintenance operations. This tool provides workers with augmented instructions to perform maintenance tasks more efficiently. Our prototype is a complete framework characterized by its capability to automatically generate all the necessary data from input based on untextured 3D triangle meshes, without requiring additional user intervention. An automatic offline stage extracts the basic geometric features. These are used during the online stage to compute the camera pose from a monocular image. Thus, we can handle the usual textureless 3D models used in industrial applications. A self-supplied and robust markerless tracking system that combines an edge tracker, a point based tracker and a 3D particle filter has also been designed to continuously update the camera pose. Our framework incorporates an automatic path-planning module. During the offline stage, the assembly/disassembly sequence is automatically deduced from the 3D model geometry. This information is used to generate the disassembly instructions for workers.
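Of the three combined tracking components (edge tracker, point-based tracker, 3D particle filter), the particle filter is the easiest to sketch generically. The following is a hedged illustration of a bootstrap particle filter over a 6-DoF pose with a caller-supplied likelihood, not the authors' tracker; all names and noise levels are illustrative:

```python
import numpy as np

def particle_filter_step(particles, weights, measure_likelihood,
                         trans_noise=0.005, rot_noise=0.01):
    """One predict-weight-resample step over 6-DoF pose particles.
    particles: (N, 6) array of [rx, ry, rz, tx, ty, tz] (axis-angle + translation).
    measure_likelihood: callable mapping a pose vector to a scalar likelihood,
    e.g. derived from the reprojection error of model edges or points."""
    n = len(particles)
    # Predict: diffuse the particles with Gaussian motion noise.
    noise = np.hstack([
        np.random.normal(0.0, rot_noise, (n, 3)),
        np.random.normal(0.0, trans_noise, (n, 3)),
    ])
    particles = particles + noise
    # Weight: score each pose hypothesis against the current image measurement.
    weights = np.array([measure_likelihood(p) for p in particles])
    weights = weights / max(weights.sum(), 1e-12)
    # Resample: systematic resampling concentrates particles on likely poses.
    positions = (np.arange(n) + np.random.uniform()) / n
    indices = np.searchsorted(np.cumsum(weights), positions)
    indices = np.minimum(indices, n - 1)
    particles = particles[indices]
    weights = np.full(n, 1.0 / n)
    # Pose estimate: weighted mean is adequate when the particle spread is small.
    return particles, weights, particles.mean(axis=0)
```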
Citations: 54
Interactive visualization technique for truthful color reproduction in spatial augmented reality applications
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092381
Christoffer Menk, R. Koch
Spatial augmented reality is especially interesting for the design process of a car, because a lot of virtual content and corresponding real objects are used. One important issue in such a process is that the designer can trust the visualized colors on the real object, because design decisions are made on basis of the projection. In this article, we present an interactive visualization technique which is able to exactly compute the RGB values for the projected image, so that the resulting colors on the real object are equally perceived as the real desired colors. Our approach computes the influences of the ambient light, the material, the pose and the color model of the projector to the resulting colors of the projected RGB values by using a physically-based computation. This information allows us to compute the adjustment for the RGB values for varying projector positions at interactive rates. Since the amount of projectable colors does not only depend on the material and the ambient light, but also on the pose of the projector, our method can be used to interactively adjust the range of projectable colors by moving the projector to arbitrary positions around the real object. The proposed method is evaluated in a number of experiments.
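The compensation the abstract describes can be pictured with a simplified linear radiometric model: the observed surface color equals an ambient contribution plus a per-pixel mixing matrix (folding together material reflectance, projector color model, and geometry) applied to the linearized projector input. Inverting that model gives the projector RGB needed to hit a desired color, clamped to what is physically projectable. A hedged numpy sketch under those assumptions (the paper's physically-based model is more detailed):

```python
import numpy as np

def compensate_pixel(desired_rgb, ambient_rgb, mixing_matrix, gamma=2.2):
    """Projector input (0..1, gamma-encoded) that makes a surface pixel appear
    as desired_rgb under the simple linear model:
        observed = ambient_rgb + mixing_matrix @ projector_linear"""
    desired = np.asarray(desired_rgb, dtype=float)
    ambient = np.asarray(ambient_rgb, dtype=float)
    target = desired - ambient                       # light the projector must add
    linear = np.linalg.solve(mixing_matrix, target)  # invert the color mixing
    clipped = np.clip(linear, 0.0, 1.0)              # limits of projectable colors
    out_of_gamut = not np.allclose(linear, clipped, atol=1e-6)
    return clipped ** (1.0 / gamma), out_of_gamut

# Example with illustrative values: projector light strongly attenuated in blue.
M = np.diag([0.6, 0.5, 0.4])
rgb, clipped = compensate_pixel([0.5, 0.4, 0.3], [0.05, 0.05, 0.05], M)
```

The out-of-gamut flag corresponds to the abstract's point that the range of projectable colors depends on material, ambient light, and projector pose, and can be widened interactively by moving the projector.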
Citations: 8
Real-time self-localization from panoramic images on mobile devices
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092368
Clemens Arth, Manfred Klopschitz, Gerhard Reitmayr, D. Schmalstieg
Self-localization in large environments is a vital task for accurately registered information visualization in outdoor Augmented Reality (AR) applications. In this work, we present a system for self-localization on mobile phones using a GPS prior and an online-generated panoramic view of the user's environment. The approach is suitable for executing entirely on current generation mobile devices, such as smartphones. Parallel execution of online incremental panorama generation and accurate 6DOF pose estimation using 3D point reconstructions allows for real-time self-localization and registration in large-scale environments. The power of our approach is demonstrated in several experimental evaluations.
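Once 2D-3D correspondences between panorama features and the 3D point reconstruction are established, the accurate 6DOF pose estimation mentioned in the abstract amounts to a robust Perspective-n-Point problem. A standard OpenCV sketch of that step follows (illustrative only; the authors' mobile implementation is their own):

```python
import cv2
import numpy as np

def localize_from_correspondences(points_3d, points_2d, camera_matrix,
                                  dist_coeffs=None, reproj_error_px=4.0):
    """Robust 6-DoF camera pose from 2D-3D matches using PnP with RANSAC.
    points_3d: (N, 3) reconstruction points, points_2d: (N, 2) image features."""
    points_3d = np.asarray(points_3d, dtype=np.float32)
    points_2d = np.asarray(points_2d, dtype=np.float32)
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5, dtype=np.float32)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d, points_2d, camera_matrix, dist_coeffs,
        reprojectionError=reproj_error_px, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the estimated camera pose
    return R, tvec, inliers
```

A GPS prior, as used in the paper, narrows down which part of the reconstruction the 2D features are matched against before this pose step runs.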
Citations: 97
Augmented reality in the psychomotor phase of a procedural task
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092386
S. Henderson, Steven K. Feiner
Procedural tasks are common to many domains, ranging from maintenance and repair, to medicine, to the arts. We describe and evaluate a prototype augmented reality (AR) user interface designed to assist users in the relatively under-explored psychomotor phase of procedural tasks. In this phase, the user begins physical manipulations, and thus alters aspects of the underlying task environment. Our prototype tracks the user and multiple components in a typical maintenance assembly task, and provides dynamic, prescriptive, overlaid instructions on a see-through head-worn display in response to the user's ongoing activity. A user study shows participants were able to complete psychomotor aspects of the assembly task significantly faster and with significantly greater accuracy than when using 3D-graphics-based assistance presented on a stationary LCD. Qualitative questionnaire results indicate that participants overwhelmingly preferred the AR condition, and ranked it as more intuitive than the LCD condition.
Citations: 246
MR in OR: First analysis of AR/VR visualization in 100 intra-operative Freehand SPECT acquisitions
Pub Date: 2011-10-26 DOI: 10.1109/ISMAR.2011.6092388
A. Okur, Seyed-Ahmad Ahmadi, A. Bigdelou, T. Wendler, Nassir Navab
For the past two decades, medical Augmented Reality visualization has been researched and prototype systems have been tested in laboratory setups and limited clinical trials. To our knowledge, until now, no commercial system incorporating Augmented Reality visualization has been developed and used routinely within the real-life surgical environment. In this paper, we report on observations and analysis concerning the usage of a commercially developed and clinically approved Freehand SPECT system, which incorporates monitor-based Mixed Reality visualization, during real-life surgeries. The workflow-based analysis we present is focused on an atomic sub-task of sentinel lymph node biopsy. We analyzed the usage of the Augmented and Virtual Reality visualization modes by the surgical team, while leaving the staff completely uninfluenced and unbiased in order to capture the natural interaction with the system. We report on our observations in over 100 Freehand SPECT acquisitions within different phases of 52 surgeries.
Citations: 29