
Latest articles from the 2010 IEEE International Symposium on Mixed and Augmented Reality

Point-and-shoot for ubiquitous tagging on mobile phones
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643551
Wonwoo Lee, Youngmin Park, V. Lepetit, Woontack Woo
We propose a novel way to augment a real scene with minimal user intervention on a mobile phone: the user only has to point the phone camera at the desired location of the augmentation. Our method is valid for vertical or horizontal surfaces only, but this is not a restriction in practice in man-made environments, and it avoids any reconstruction of the 3D scene, which is still a delicate process. Our approach is inspired by recent work on perspective patch recognition [5]; we show how to modify it for better performance on mobile phones, and how to exploit the phone's accelerometers to relax the need for fronto-parallel views. In addition, our implementation allows the augmentations and the required data to be shared over peer-to-peer communication to build a shared AR space on mobile phones.
Citations: 19
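The accelerometer idea above can be sketched numerically: given the gravity direction in camera coordinates and the camera intrinsics K (both assumed known; function and variable names here are illustrative, not the authors' code), rotating the view so gravity aligns with the optical axis induces a homography that warps the image of a horizontal surface toward a fronto-parallel view.

```python
import numpy as np

def rectifying_homography(K, gravity_cam):
    """Homography warping the image of a horizontal surface toward a
    fronto-parallel view, given gravity in camera coordinates.
    A sketch of the idea only, not the paper's implementation."""
    g = gravity_cam / np.linalg.norm(gravity_cam)
    z = np.array([0.0, 0.0, 1.0])            # virtual camera looks straight down
    v = np.cross(g, z)                       # rotation axis (unnormalized)
    s = np.linalg.norm(v)                    # sin of the rotation angle
    c = float(np.dot(g, z))                  # cos of the rotation angle
    if s < 1e-12:
        R = np.eye(3)                        # already fronto-parallel
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        # Rodrigues formula rotating g onto z
        R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)
    # a pure camera rotation R induces the image homography K R K^-1
    return K @ R @ np.linalg.inv(K)

# hypothetical intrinsics and a tilted-phone gravity reading
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rectifying_homography(K, np.array([0.3, 0.1, 0.95]))
```

Applying H with any image-warping routine then yields an approximately fronto-parallel patch, which is what relaxes the fronto-parallel-view requirement for the patch recognizer.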
Perceptual issues in augmented reality revisited
Pub Date : 2010-11-22 DOI: 10.1109/ismar.2010.5643530
E. Kruijff, J. Swan, Steven K. Feiner
This paper provides a classification of perceptual issues in augmented reality, created with a visual processing and interpretation pipeline in mind. We organize issues into ones related to the environment, capturing, augmentation, display, and individual user differences. We also illuminate issues associated with more recent platforms such as handhelds or projector-camera systems. Throughout, we describe current approaches to addressing these problems, and suggest directions for future research.
Citations: 439
Keyframe-based modeling and tracking of multiple 3D objects
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643569
Kiyoung Kim, V. Lepetit, Woontack Woo
We propose a real-time solution for modeling and tracking multiple 3D objects in unknown environments. Our contribution is two-fold: First, we show how to scale with the number of objects. This is done by combining recent techniques for image retrieval and online Structure from Motion, which can be run in parallel. As a result, tracking 40 objects in 3D can be done within 6 to 25 milliseconds per frame, even under difficult conditions for tracking. Second, we propose a method to let the user add new objects very quickly. The user simply has to select in an image a 2D region lying on the object. A 3D primitive is then fitted to the features within this region, and adjusted to create the object 3D model. In practice, this procedure takes less than a minute.
Citations: 39
Validating Spatial Augmented Reality for interactive rapid prototyping
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643599
Shane Porter, M. Marner, Ross T. Smith, J. Zucco, B. Thomas
This paper investigates the use of Spatial Augmented Reality in the prototyping of new human-machine interfaces, such as control panels or car dashboards. The prototyping system uses projectors to present the visual appearance of controls onto a mock-up of a product. Finger tracking is employed to allow real-time interactions with the controls. This technology can be used to quickly and inexpensively create and evaluate interface prototypes for devices. In the past, evaluating a prototype involved constructing a physical model of the device with working components such as buttons. We have conducted a user study to compare these two methods of prototyping and to validate the use of spatial augmented reality for rapid iterative interface prototyping. Participants of the study were required to press pairs of buttons in sequence, and interaction times were measured. The results indicate that, although slower, users can interact naturally with projected control panels.
Citations: 47
Light-weight marker hiding for augmented reality
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643590
Otto Korkalo, M. Aittala, S. Siltanen
In augmented reality, marker-based tracking is the most common method for camera pose estimation. Most of the markers are black and white patterns that are visually obtrusive, but they can be hidden from the video using image inpainting methods. In this paper, we present a computationally efficient approach to achieve this. We use a high-resolution hiding texture, which is captured and generated only once. To capture continuous changes in illumination, reflections and exposure, we also compute a very low-resolution texture at each frame. The coarse and fine textures are combined to obtain a detailed hiding texture which reacts to changing conditions and runs efficiently in mobile phone environments.
Citations: 31
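The coarse/fine combination above can be sketched as follows. This is a minimal sketch under assumptions: per-block gains are one plausible way to let the per-frame low-resolution texture modulate the stored high-resolution texture, not necessarily the paper's exact formula, and the shapes and names are invented for illustration.

```python
import numpy as np

def combine_textures(fine, coarse):
    """Modulate the one-time high-res hiding texture `fine` (H, W) with a
    per-frame low-res texture `coarse` (h, w), where H, W are integer
    multiples of h, w.  A sketch of the coarse/fine idea only."""
    H, W = fine.shape
    h, w = coarse.shape
    by, bx = H // h, W // w
    # block mean of the stored fine texture at the coarse resolution
    block = fine.reshape(h, by, w, bx).mean(axis=(1, 3))
    # per-block gain: how this frame's lighting/exposure differs from the
    # conditions under which the fine texture was captured
    gain = coarse / (block + 1e-8)
    # nearest-neighbor upsample of the gain back to full resolution
    gain_up = np.repeat(np.repeat(gain, by, axis=0), bx, axis=1)
    return fine * gain_up
```

If the current frame matches the capture conditions, the gains are all 1 and the stored texture is used unchanged; a brighter frame scales the hiding texture up block by block, which is what lets the hidden region track illumination and exposure changes cheaply.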
Determining the point of minimum error for 6DOF pose uncertainty representation
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643548
D. Pustka, J. Willneff, Oliver G. Wenisch, Peter Lukewille, Kurt Achatz, P. Keitler, G. Klinker
In many augmented reality applications, in particular in the medical and industrial domains, knowledge about tracking errors is important. Most current approaches characterize tracking errors by 6×6 covariance matrices that describe the uncertainty of a 6DOF pose, where the center of rotational error lies in the origin of a target coordinate system. This origin is assumed to coincide with the geometric centroid of a tracking target. In this paper, we show that, in case of a multi-camera fiducial tracking system, the geometric centroid of a body does not necessarily coincide with the point of minimum error. The latter is not fixed to a particular location, but moves, depending on the individual observations. We describe how to compute this point of minimum error given a covariance matrix and verify the validity of the approach using Monte Carlo simulations on a number of scenarios. Looking at the movement of the point of minimum error, we find that it can be located surprisingly far away from its expected position. This is further validated by an experiment using a real camera system.
Citations: 8
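The quantity being minimized above can be written down directly: under a small-angle error model, a point x fixed in the target sees position error δt + δθ × x, so its covariance is A Σ Aᵀ with A = [I  −[x]×]. A numerical sketch (assuming the 6×6 covariance is ordered translation-then-rotation; the grid search is illustrative, and since the trace is quadratic in x a closed-form minimizer also exists):

```python
import numpy as np

def skew(x):
    """Cross-product matrix [x]× such that skew(x) @ y == np.cross(x, y)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def point_error_trace(Sigma, x):
    """Trace of the position-error covariance at body point x, given a 6x6
    pose covariance Sigma ordered (translation, rotation) -- an assumption."""
    A = np.hstack([np.eye(3), -skew(x)])
    return np.trace(A @ Sigma @ A.T)

def point_of_minimum_error(Sigma, span=1.0, n=21):
    """Coarse grid search for the point minimizing the error trace."""
    grid = np.linspace(-span, span, n)
    best, best_x = np.inf, None
    for gx in grid:
        for gy in grid:
            for gz in grid:
                x = np.array([gx, gy, gz])
                t = point_error_trace(Sigma, x)
                if t < best:
                    best, best_x = t, x
    return best_x

# hypothetical covariance: small translational, larger rotational uncertainty
Sigma = np.diag([1e-4, 1e-4, 1e-4, 1e-2, 1e-2, 1e-2])
p = point_of_minimum_error(Sigma)
```

With a diagonal covariance the minimum sits at the origin; it is the translation-rotation cross-covariance terms, which depend on the individual observations, that move the point of minimum error away from the target centroid.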
EXMAR: EXpanded view of mobile augmented reality
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643584
Sungjae Hwang, Hyungeun Jo, J. Ryu
Many studies have sought to minimize the increase in psychological and physical load caused by mobile augmented reality systems. In this paper, we propose a new technique called “EXMAR”, which enables the user to explore his/her surroundings with an expanded field of view, reducing physical movement. Through this novel interaction technique, the user can explore off-screen points of interest with environmental contextual information by simple dragging gestures. To evaluate this initial approach, we conducted a proof-of-concept usability test under a set of scenarios such as “Exploring objects behind the user”, “Avoiding the invasion of personal space”, and “Walk and type with front-view.” Through this initial examination, we found that users can explore off-screen points of interest and grasp the spatial relations without increased mental effort. We believe this preliminary study gives a meaningful indication that employing an interactive field of view can be a useful way to decrease physical load without any additional mental effort in a mixed and augmented reality environment.
Citations: 9
Generating vision based Lego augmented reality training and evaluation systems
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643578
T. Engelke, Sabine Webel, N. Gavish
The creation of training applications using Augmented Reality (AR) is still a new field of research, and evaluation is needed to obtain good training results. The questions that arise in building such systems concern the general process of generation, visualization, and evaluation, and its psychological background. An important aspect of vision-based AR is also the robust tracking and initialization of objects for correct augmentation. In this work we present the concept of an entire processing chain that allows efficient and automatic generation of such training systems, which can also be used for evaluation. We do this in the context of a Lego training system. While explaining the whole process of application generation and usage, we also present a novel approach for robust marker-free initialization of colored, partly occluded plates and their tracking using an off-the-shelf monocular camera.
Citations: 11
Image-based ghostings for single layer occlusions in augmented reality
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643546
S. Zollmann, Denis Kalkofen, Erick Méndez, Gerhard Reitmayr
In augmented reality displays, X-Ray visualization techniques make hidden objects visible through combining the physical view with an artificial rendering of the hidden information. An important step in X-Ray visualization is to decide which parts of the physical scene should be kept and which should be replaced by overlays. The combination should provide users with essential perceptual cues to understand the relationship of depth between hidden information and the physical scene. In this paper we present an approach that addresses this decision in unknown environments by analyzing camera images of the physical scene and using the extracted information for occlusion management. Pixels are grouped into perceptually coherent image regions and a set of parameters is determined for each region. The parameters change the X-Ray visualization for either preserving existing structures or generating synthetic structures. Finally, users can customize the overall opacity of foreground regions to adapt the visualization.
Citations: 63
An Augmented Reality X-Ray system based on visual saliency
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643547
C. Sandor, Andrew Cunningham, Arindam Dey, Ville-Veikko Mattila
In the past, several systems have been presented that enable users to view occluded points of interest using Augmented Reality X-ray visualizations. It is challenging to design a visualization that provides correct occlusions between occluder and occluded objects while maximizing legibility. We have previously published an Augmented Reality X-ray visualization that renders edges of the occluder region over the occluded region to facilitate correct occlusions while providing foreground context. While this approach is simple and works in a wide range of situations, it provides only minimal context of the occluder object.
Citations: 94