
Latest publications: International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments

Widget manipulation revisited: a case study in modeling interactions between experimental conditions
J. Martens, A. Kok, R. V. Liere
Widgets are often used to perform control tasks in three-dimensional (3D) virtual environments (VEs). Spatial interactions through widgets require precise 3D manipulations, and several design aspects of VEs contribute to the ease, accuracy, and speed with which users can perform these interactions. Throughout the years, VE researchers have studied relevant design aspects; for example, the location and size of the widgets, monoscopic versus stereoscopic viewing, the presence or absence of co-location, and the inclusion of (passive) tactile feedback have all been studied. However, researchers have mostly studied design aspects in isolation and have paid little attention to possible interactions between conditions. In this paper, we introduce a method for modeling interaction effects between experimental conditions and illustrate it using data from a specific case study, i.e., widget manipulation tasks. More specifically, we model how the effect of passive tactile feedback interacts with stereoscopic viewing for three widget manipulation tasks. We also model how these effects vary between two tasks, i.e., button and menu item selection. Models that include interaction effects between experimental conditions can be used to gain a deeper understanding of the system design trade-offs of a virtual environment.
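As a toy illustration of the kind of model the abstract describes (not the authors' actual analysis), a two-factor linear model with an interaction term can be fitted by ordinary least squares. The factor coding and all data values below are invented for illustration:

```python
import numpy as np

# Hypothetical completion times (seconds) for a 2x2 design:
# factor A = passive tactile feedback (0/1), factor B = stereoscopic viewing (0/1).
# Two replicates per cell; the numbers are invented, not from the paper.
A = np.array([0, 0, 1, 1, 0, 0, 1, 1])
B = np.array([0, 1, 0, 1, 0, 1, 0, 1])
t = np.array([2.9, 2.4, 2.6, 1.7, 3.1, 2.5, 2.7, 1.8])

# Design matrix: intercept, main effects, and the A:B interaction term.
X = np.column_stack([np.ones_like(A), A, B, A * B])
beta, *_ = np.linalg.lstsq(X, t, rcond=None)
intercept, main_a, main_b, interaction = beta

# A non-zero interaction coefficient means the benefit of tactile
# feedback depends on whether stereoscopic viewing is enabled.
```

With the saturated model above, the fitted coefficients reproduce the cell means exactly; a near-zero `interaction` would indicate the two conditions act independently.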
Citations: 3
A comparison of tracking- and controller-based input for complex bimanual interaction in virtual environments
André Kunert, Alexander Kulik, A. Huckauf, B. Fröhlich
We describe a user study comparing a two-handed controller-based input device to a two-handed tracking solution, both offering the control space of six degrees of freedom to each hand. For benchmarking the different input modalities, we implemented a set of evaluation tasks requiring viewpoint navigation, selection, and object manipulation in a maze-like virtual environment. The results of the study reveal similar overall performance for both input modalities for compound tasks. However, significant differences were found with respect to the subtasks involved. Furthermore, we show that the integral attributes of a subtask do not necessarily need to be manipulated by a single hand. Instead, the degrees of freedom simultaneously required for operating integrally perceived subtasks may also be distributed across both hands for better control.
Citations: 15
Three extensions to subtractive crosstalk reduction
F. Smit, R. V. Liere, B. Fröhlich
Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly fuse stereoscopic images. In this paper, three extensions for improved software crosstalk reduction are introduced. First, we propose a reduction method operating in CIELAB color space to find a perceptually better color match for crosstalk corrected pixels. Second, we introduce a geometry-based reduction method that operates on fused 3D pixels. Finally, a run-time optimization is introduced that avoids the need to process each pixel. We evaluated our CIELAB-based method using the Visible Differences Predictor (VDP). Our results show that we are able to significantly improve crosstalk reduction compared to previously used methods that operate in RGB color space. The combination of our methods provides an improved, real-time software crosstalk reduction framework, applicable to a wider range of scenes, delivering better quality, higher performance, and more flexibility.
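For context, plain subtractive crosstalk reduction in RGB space, the baseline that the paper's CIELAB-based method improves on, can be sketched as follows. The leakage coefficient and function name are assumptions for illustration; real displays need a measured, possibly channel-dependent coefficient:

```python
import numpy as np

def subtractive_crosstalk_reduction(left, right, leakage=0.08):
    """First-order subtractive reduction in RGB space (the baseline method):
    pre-subtract the fraction of the other eye's image that the display is
    expected to leak. `leakage` is an assumed display-dependent coefficient;
    images are float arrays in [0, 1]."""
    out_left = np.clip(left - leakage * right, 0.0, 1.0)
    out_right = np.clip(right - leakage * left, 0.0, 1.0)
    return out_left, out_right

# After the physical leak adds back `leakage * other`, the viewer sees
# approximately the intended image wherever no clipping occurred; clipping
# in dark regions is exactly where perceptual (CIELAB) correction helps.
```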
Citations: 16
Wearable mixed reality system in less than 1 pound
A. Peternier, F. Vexo, D. Thalmann
We have designed a wearable Mixed Reality (MR) framework which allows game-like 3D scenes to be rendered in real time on see-through head-mounted displays (HMDs) and localizes the user's position within a known wireless network area. Our equipment weighs less than 1 pound (0.45 kg). The information visualized on the mobile device can be sent on demand from a remote server and rendered on board in real time. We present our PDA-based platform as a valid alternative for wearable MR contexts with fewer mobility and encumbrance constraints: our approach eliminates the backpack with a laptop, a GPS antenna, and a heavy HMD usually required in such cases. We also discuss our results and user experiences with our approach of using a handheld for 3D rendering.
Citations: 14
Managing missed interactions in distributed virtual environments
S. Parkin, Péter András, G. Morgan
A scalable distributed virtual environment (DVE) may be achieved by ensuring virtual world objects communicate their actions only to those objects that fall within their influence, reducing the need to send and process unnecessary messages. A missed interaction may be defined as a failure to exchange the messages needed to appropriately model object interaction. A number of parameters under the control of a DVE developer may influence the possibility of missed interactions occurring (e.g., object velocities, areas of influence). However, due to the complexities associated with object movement and the deployment environment (e.g., non-deterministic object movement, network latency), it is not clear how to choose values for such parameters that minimise missed interactions while maintaining scalability (minimal message passing). In this paper we present a tool which simulates a DVE and gives developers an indication of appropriate parameter values when balancing missed interactions against scalability.
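A minimal sketch of the influence-based message filtering described above: an object's action is delivered only to objects inside its area of influence. The class and function names are illustrative, not from the paper's tool:

```python
import math

class WorldObject:
    """Toy 2D virtual-world object with a circular area of influence."""
    def __init__(self, name, x, y, influence_radius):
        self.name, self.x, self.y = name, x, y
        self.influence_radius = influence_radius

def recipients(sender, objects):
    """Return the names of objects that should receive the sender's action:
    only those within the sender's influence radius. Objects outside it are
    skipped entirely, which is where the message savings (and the risk of
    missed interactions, if the radius is too small) come from."""
    out = []
    for obj in objects:
        if obj is sender:
            continue
        dist = math.hypot(obj.x - sender.x, obj.y - sender.y)
        if dist <= sender.influence_radius:
            out.append(obj.name)
    return out
```

Shrinking `influence_radius` reduces message traffic but raises the chance that a fast-moving object enters the radius between updates, i.e., a missed interaction.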
Citations: 3
Measuring the discernability of virtual objects in conventional and stylized augmented reality
J. Fischer, D. Cunningham, D. Bartz, C. Wallraven, H. Bülthoff, W. Straßer
In augmented reality, virtual graphical objects are overlaid on the real environment of the observer. Conventional augmented reality systems normally use standard real-time rendering methods for generating the graphical representations of virtual objects. These renderings contain the typical artifacts of computer generated graphics, e.g., aliasing caused by the rasterization process and unrealistic, manually configured illumination models. Due to these artifacts, virtual objects look artificial and can easily be distinguished from the real environment. A different approach to generating augmented reality images is the basis of stylized augmented reality [FBS05c]. Here, similar types of artistic or illustrative stylization are applied to the virtual objects and to the camera image of the real environment. Therefore, real and virtual image elements look significantly more similar and are less distinguishable from each other. In this paper, we present the results of a psychophysical study on the effectiveness of stylized augmented reality. In this study, a number of participants were asked to decide whether objects shown in images of augmented reality scenes are virtual or real. Conventionally rendered as well as stylized augmented reality images and short video clips were presented to the participants. The correctness of the participants' responses and their reaction times were recorded. The results of our study show that an equalized level of realism is achieved by using stylized augmented reality, i.e., it becomes significantly more difficult to distinguish virtual objects from real objects.
Citations: 27
A multi modal table-top 3D modeling tool in augmented environments
Thomas Novotny, I. Lindt, Wolfgang Broll
Even with today's highly sophisticated 3D modeling programs, creating, assembling, and adapting 3D models is still a big challenge for inexperienced users. In this paper we present our approach to an intuitive table-top 3D modeling tool in Augmented Reality. It allows the author to view 3D virtual objects within his natural working environment, to manipulate them, and to create new 3D elements easily. The offered interaction techniques support the author's activity through a combination of tangible user interfaces with voice recognition, a gaze-based view pointer, and 3D widgets as components of a multi modal user interface. Within the scope of this work, intuitive interaction techniques were realized to offer participants an easy way of working within an augmented environment. User tests were performed to compare our approach to a WIMP-based desktop application and to an alternative AR modeling application.
Citations: 2
Model-based hybrid tracking for medical augmented reality
J. Fischer, Michael Eichler, D. Bartz, W. Straßer
Camera pose estimation is one of the most important, but also one of the most challenging, tasks in augmented reality. Without a highly accurate estimation of the position and orientation of the digital video camera, it is impossible to render a spatially correct overlay of graphical information. This requirement is even more crucial in medical applications, where the virtual objects are supposed to be correctly aligned with the patient. Many medical AR systems use specialized tracking devices, which can be of limited suitability for real-world scenarios. We have developed an AR framework for surgical applications based on existing medical equipment. A surgical navigation device delivers tracking information measured by a built-in infrared camera system, which is the basis for the pose estimation of the AR video camera. However, depending on the conditions in the environment, this infrared pose data can contain discernible tracking errors. One main drawback of the medical tracking device is the fact that, while it delivers a very high positional accuracy, the reported camera orientation can contain a relatively large error. In this paper, we present a hybrid tracking scheme for medical augmented reality based on a certified medical tracking system. The final pose estimation takes the initial infrared tracking data as well as salient features in the camera image into account. The vision-based component of the tracking algorithm relies on a pre-defined graphical model of the observed scene. The infrared and vision-based tracking data are tightly integrated into a unified pose estimation algorithm. This algorithm is based on an iterative numerical optimization method. We describe an implementation of the algorithm and present experimental data showing that our new method is capable of delivering a more accurate pose estimation.
Citations: 10
A model for the expected running time of collision detection using AABB trees
René Weller, Jan Klein, G. Zachmann
In this paper, we propose a model to estimate the expected running time of hierarchical collision detection that utilizes AABB trees, which are a frequently used type of bounding volume (BV). We show that the average running time for the simultaneous traversal of two binary AABB trees depends on two characteristic parameters: the overlap of the root BVs and the BV diminishing factor within the hierarchies. With this model, we show that the average running time is in O(n) or even in O(log n) for realistic cases. Finally, we present some experiments that confirm our theoretical considerations. We believe that our results are interesting not only from a theoretical point of view, but also for practical applications, e.g., in time-critical collision detection scenarios where our running time prediction could help to make the best use of the CPU time available.
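The simultaneous traversal the abstract analyzes can be sketched as follows. This is a simplified recursive variant for illustration, not the authors' implementation: recursion continues only while the bounding boxes overlap, which is why the expected cost is governed by the root overlap and by how quickly child boxes shrink (the diminishing factor):

```python
class AABBNode:
    """Node of an AABB tree; `box` is ((minx, miny, minz), (maxx, maxy, maxz)).
    Leaves carry a `primitive` identifier; internal nodes have two children."""
    def __init__(self, box, left=None, right=None, primitive=None):
        self.box, self.left, self.right, self.primitive = box, left, right, primitive

def boxes_overlap(a, b):
    """Axis-aligned boxes overlap iff their intervals overlap on every axis."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def traverse(a, b, pairs):
    """Simultaneous traversal of two AABB trees; collects candidate
    colliding primitive pairs in `pairs`."""
    if not boxes_overlap(a.box, b.box):
        return  # prune: nothing below these nodes can collide
    if a.primitive is not None and b.primitive is not None:
        pairs.append((a.primitive, b.primitive))
        return
    # Descend into a non-leaf node (here: always the first tree if possible).
    if a.primitive is None:
        for child in (a.left, a.right):
            traverse(child, b, pairs)
    else:
        for child in (b.left, b.right):
            traverse(a, child, pairs)
```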
Cited by: 1
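The running-time model above concerns the standard simultaneous traversal of two bounding-volume hierarchies. The following is a minimal, textbook-style sketch of that traversal — not the authors' implementation; the names `AABB`, `Node`, and `traverse` are illustrative:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class AABB:
    lo: Tuple[float, float, float]
    hi: Tuple[float, float, float]

def overlaps(a: AABB, b: AABB) -> bool:
    # Two axis-aligned boxes overlap iff they overlap on every axis.
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

@dataclass
class Node:
    box: AABB
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    tri: Optional[int] = None  # leaf payload: e.g. a triangle index

def traverse(a: Node, b: Node, pairs: List[Tuple[int, int]]) -> None:
    """Simultaneous traversal of two AABB trees: recurse only into
    node pairs whose boxes overlap; leaf/leaf hits become candidate
    pairs for an exact primitive-level intersection test."""
    if not overlaps(a.box, b.box):
        return
    if a.tri is not None and b.tri is not None:
        pairs.append((a.tri, b.tri))
        return
    # Descend into whichever node(s) are internal.
    if a.tri is not None:
        traverse(a, b.left, pairs); traverse(a, b.right, pairs)
    elif b.tri is not None:
        traverse(a.left, b, pairs); traverse(a.right, b, pairs)
    else:
        traverse(a.left, b.left, pairs)
        traverse(a.left, b.right, pairs)
        traverse(a.right, b.left, pairs)
        traverse(a.right, b.right, pairs)
```

The two parameters of the paper's model map directly onto this recursion: the root-BV overlap governs whether the top-level call recurses at all, and the diminishing factor governs how quickly child boxes shrink and thus how many of the four recursive calls survive the overlap test.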
Camera setup optimization for optical tracking in virtual environments
Philippe Cerfontaine, M. Schirski, Daniel Bündgens, T. Kuhlen
In this paper we present a method for finding the optimal camera alignment for a tracking system with multiple cameras, given the volume that should be tracked and an initial camera setup. Our approach is twofold: on the one hand, we use a rather simple gradient-based steepest-descent method; on the other hand, we also implement a simulated annealing algorithm that offers guaranteed optimality assertions. Both approaches are fully automatic and take advantage of modern graphics hardware, since we implemented a GPU-accelerated visibility test. The proposed algorithms can automatically optimize the whole camera setup by adjusting the given set of parameters. The optimization may have different goals depending on the desired application: e.g., one may wish to optimize towards the widest possible coverage of the specified volume, while others would prefer to maximize the number of cameras seeing a certain area to overcome heavy occlusion problems during the tracking process. Our approach also considers parameter constraints that the user may specify according to the local environment in which the cameras have to be set up. This makes it possible to formulate higher-level constraints simply, e.g., that all cameras have a vertical up vector. The method individually adapts the optimization to the given situation and also asserts the feasibility of the algorithm's output.
{"title":"Camera setup optimization for optical tracking in virtual environments","authors":"Philippe Cerfontaine, M. Schirski, Daniel Bündgens, T. Kuhlen","doi":"10.2312/EGVE/EGVE06/081-088","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/081-088","url":null,"abstract":"In this paper we present a method for finding the optimal camera alignment for a tracking system with multiple cameras, by specifying the volume that should be tracked and an initial camera setup. The approach we use is twofold: on the one hand, we use a rather simple gradient based steepest descent method and on the other hand, we also implement a simulated annealing algorithm that features guaranteed optimality assertions. Both approaches are fully automatic and take advantage of modern graphics hardware since we implemented a GPU-based accelerated visibility test. The proposed algorithms can automatically optimize the whole camera setup by adjusting the given set of parameters. The optimization may have different goals depending on the desired application, e.g. one may wish to optimize towards the widest possible coverage of the specified volume, while others would prefer to maximize the number of cameras seeing a certain area to overcome heavy occlusion problems during the tracking process. Our approach also considers parameter constraints that the user may specify according to the local environment where the cameras have to be set up. This makes it possible to simply formulate higher level constraints e.g. all cameras have a vertical up vector. 
It individually adapts the optimization to the given situation and also asserts the feasibility of the algorithm's output.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127173151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
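To illustrate the kind of search the abstract describes, here is a minimal simulated-annealing sketch that rotates camera orientations to maximize coverage of a sampled target volume. This is an assumption-laden toy, not the paper's method: positions are fixed, the "visibility" test is a simple field-of-view cone rather than the paper's GPU-based occlusion-aware test, the coverage goal (each point seen by at least two cameras) is one of the objectives the abstract mentions, and all names are hypothetical:

```python
import math
import random

def visible(cam_pos, cam_dir, fov_deg, point):
    # A point counts as "seen" if it lies inside the camera's FOV cone.
    # cam_dir is assumed to be a unit vector; occlusion is ignored here.
    dx = [p - c for p, c in zip(point, cam_pos)]
    norm = math.sqrt(sum(d * d for d in dx))
    if norm == 0.0:
        return True
    cosang = sum(d * v for d, v in zip(dx, cam_dir)) / norm
    return cosang >= math.cos(math.radians(fov_deg / 2))

def coverage(cams, samples, fov_deg=45.0):
    # Fraction of sampled volume points seen by at least two cameras,
    # which helps against occlusion during tracking.
    seen = sum(
        1 for p in samples
        if sum(visible(pos, d, fov_deg, p) for pos, d in cams) >= 2
    )
    return seen / len(samples)

def anneal(cams, samples, steps=2000, t0=1.0):
    # Simulated annealing over camera yaw angles (positions stay fixed).
    rng = random.Random(0)
    cur = best = coverage(cams, samples)
    for k in range(steps):
        t = t0 * (1 - k / steps)  # linear cooling schedule
        i = rng.randrange(len(cams))
        pos, (x, y, z) = cams[i]
        ang = rng.gauss(0.0, 0.2)  # small random yaw perturbation
        nd = (x * math.cos(ang) - y * math.sin(ang),
              x * math.sin(ang) + y * math.cos(ang), z)
        old = cams[i]
        cams[i] = (pos, nd)
        new = coverage(cams, samples)
        # Accept improvements always, and worsenings with a probability
        # that shrinks as the temperature drops.
        if new >= cur or rng.random() < math.exp((new - cur) / max(t, 1e-9)):
            cur = new
            best = max(best, cur)
        else:
            cams[i] = old  # reject: restore the previous orientation
    return best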
Journal
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments