
Latest publications from the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments

Hybrid interfaces in VEs: intent and interaction
G. D. Haan, E. J. Griffith, M. Koutek, F. Post
Hybrid user interfaces (UIs) integrate well-known 2D user interface elements into the 3D virtual environment, providing a familiar and portable interface across a variety of VR systems. However, their usability often suffers from limited accuracy and speed, caused by tracking inaccuracies and a lack of constraints and feedback. To ease these difficulties, large widgets and bulky interface elements must often be used, which at the same time limit the size of the 3D workspace and restrict the space where other supplemental 2D information can be displayed. In this paper, we present two developments addressing this problem: supportive user interaction and a new implementation of a hybrid interface. First, we describe a small set of tightly integrated 2D windows we developed with the goal of increasing UI flexibility and reducing UI clutter. Next, we present extensions to our supportive selection technique, IntenSelect. To better cope with a variety of VR and UI tasks, we extended the selection assistance technique to include direct selection, spring-based manipulation, and specialized snapping behavior. Finally, we relate how the effective integration of these two developments eases some of the UI restrictions and produces a more comfortable VR experience.
{"title":"Hybrid interfaces in VEs: intent and interaction","authors":"G. D. Haan, E. J. Griffith, M. Koutek, F. Post","doi":"10.2312/EGVE/EGVE06/109-118","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/109-118","url":null,"abstract":"Hybrid user interfaces (UIs) integrate well-known 2D user interface elements into the 3D virtual environment, and provide a familiar and portable interface across a variety of VR systems. However, their usability is often reduced by accuracy and speed, caused by inaccuracies in tracking and a lack of constraints and feedback. To ease these difficulties often large widgets and bulky interface elements must be used, which, at the same time, limit the size of the 3D workspace and restrict the space where other supplemental 2D information can be displayed. In this paper, we present two developments addressing this problem: supportive user interaction and a new implementation of a hybrid interface. First, we describe a small set of tightly integrated 2D windows we developed with the goal of providing increased flexibility in the UI and reducing UI clutter. Next we present extensions to our supportive selection technique, IntenSelect. To better cope with a variety of VR and UI tasks, we extended the selection assistance technique to include direct selection, spring-based manipulation, and specialized snapping behavior. 
Finally, we relate how the effective integration of these two developments eases some of the UI restrictions and produces a more comfortable VR experience.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115499165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Fast continuous collision detection among deformable models using graphics processors
N. Govindaraju, I. Kabul, M. Lin, Dinesh Manocha
We present an interactive algorithm to perform continuous collision detection between general deformable models using graphics processors (GPUs). We model the motion of each object in the environment as a continuous path and check for collisions along the paths. Our algorithm precomputes the chromatic decomposition for each object and uses visibility queries on GPUs to quickly compute potentially colliding sets of primitives. We introduce a primitive classification technique to perform efficient continuous self-collision. We have implemented our algorithm on a 3.0 GHz Pentium IV PC with an NVIDIA 7800 GPU, and we highlight its performance on complex simulations composed of several thousand triangles. In practice, our algorithm is able to detect all contacts, including self-collisions, at image-space precision in tens of milliseconds.
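The core idea of continuous collision detection — treating each object's motion as a continuous path and solving for the first time of contact instead of sampling discrete frames — can be illustrated for the simplest case of two linearly moving spheres. This is a CPU sketch of the general principle only, not the paper's GPU algorithm based on chromatic decomposition and visibility queries:

```python
import math

def swept_sphere_collision(p0, v0, r0, p1, v1, r1):
    """Earliest time t in [0, 1] at which two linearly moving spheres
    touch, or None if they stay apart over the whole interval."""
    # Work in the frame of sphere 0: relative position and velocity.
    dp = [b - a for a, b in zip(p0, p1)]
    dv = [b - a for a, b in zip(v0, v1)]
    r = r0 + r1
    # Solve |dp + t*dv|^2 = r^2, i.e. a*t^2 + 2*b*t + c = 0.
    a = sum(x * x for x in dv)
    b = sum(x * y for x, y in zip(dp, dv))
    c = sum(x * x for x in dp) - r * r
    if c <= 0.0:
        return 0.0              # already overlapping at t = 0
    if a == 0.0:
        return None             # no relative motion
    disc = b * b - a * c
    if disc < 0.0:
        return None             # paths never come within contact distance
    t = (-b - math.sqrt(disc)) / a   # first root = first time of contact
    return t if 0.0 <= t <= 1.0 else None

# Two unit spheres approaching head-on along x: contact at t = 0.8.
t = swept_sphere_collision((0, 0, 0), (5, 0, 0), 1.0, (10, 0, 0), (-5, 0, 0), 1.0)
```

Real deformable-model pipelines apply the same root-finding idea per primitive pair (vertex-face, edge-edge), after culling with broad-phase structures such as the visibility queries used in the paper.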
{"title":"Fast continuous collision detection among deformable models using graphics processors","authors":"N. Govindaraju, I. Kabul, M. Lin, Dinesh Manocha","doi":"10.2312/EGVE/EGVE06/019-026","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/019-026","url":null,"abstract":"We present an interactive algorithm to perform continuous collision detection between general deformable models using graphics processors (GPUs). We model the motion of each object in the environment as a continuous path and check for collisions along the paths. Our algorithm precomputes the chromatic decomposition for each object and uses visibility queries on GPUs to quickly compute potentially colliding sets of primitives. We introduce a primitive classification technique to perform efficient continuous self-collision. We have implemented our algorithm on a 3:0 GHz Pentium IV PC with a NVIDIA 7800 GPU, and we highlight its performance on complex simulations composed of several thousands of triangles. In practice, our algorithm is able to detect all contacts, including self-collisions, at image-space precision in tens of milli-seconds.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130677951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
A new view management method for wearable augmented reality systems: emphasizing the user-viewed object and the corresponding annotation
Ryuhei Tenmoku, M. Kanbara, N. Yokoya
This paper describes a new view management method for annotation overlay in augmented reality (AR) systems. The proposed method emphasizes the user-viewed object and the corresponding annotation in order to clearly present the links between annotations and real objects. The method includes two techniques for emphasizing the user-viewed object and its annotation. First, it highlights the object the user is gazing at using an untextured 3D model. Second, when the user-viewed object is occluded by other objects, the object is complemented with an image rendered from a detailed, textured 3D model. This paper also describes experiments that demonstrate the feasibility of the proposed method using a prototype wearable AR system.
{"title":"A new view management method for wearable augmented reality systems: emphasizing the user-viewed object and the corresponding annotation","authors":"Ryuhei Tenmoku, M. Kanbara, N. Yokoya","doi":"10.2312/EGVE/EGVE06/127-134","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/127-134","url":null,"abstract":"This paper describes a new view management method for annotation overlay using augmented reality(AR) systems. The proposed method emphasizes the user-viewed object and the corresponding annotation in order to present links between annotations and real objects clearly. This method includes two kinds of techniques for emphasizing the user-viewed object and the annotation. First, the proposed method highlights the object which is gazed at by the user using a 3D model without textures. Secondly, when the user-viewed object is occluded by other objects, the object is complemented by using an image made from a detailed 3D model with textures. This paper also describes experiments which show the feasibility of the proposed method by using a prototype wearable AR system.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124804875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
GA based adaptive sampling for image-based walkthrough
Dong Hoon Lee, Jong Ryul Kim, Soon Ki Jung
This paper presents an adaptive sampling method for image-based walkthrough. Our goal is to select a minimal set from the initially densely sampled data set while guaranteeing a visually correct view from any position and in any direction in the walkthrough space. For this purpose, we formulate a covered region as the sampling criterion and then treat the sampling problem as a set covering problem. We estimate the optimal set using a genetic algorithm, and we show the efficiency of the proposed method through several experiments.
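The formulation — sampling as set covering, approximated with a genetic algorithm — can be sketched as follows. The bit-string encoding, fitness weighting, and operators below are illustrative assumptions, not the paper's actual parameters:

```python
import random

def ga_set_cover(universe, subsets, pop_size=40, generations=200, seed=1):
    """Approximate a minimal selection of subsets covering `universe`
    with a simple genetic algorithm over bit-string individuals."""
    rng = random.Random(seed)
    n = len(subsets)

    def fitness(bits):  # lower is better: cover everything first, then shrink
        covered = set()
        for i, b in enumerate(bits):
            if b:
                covered |= subsets[i]
        uncovered = len(universe - covered)
        return uncovered * (n + 1) + sum(bits)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1          # point mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return [i for i, b in enumerate(best) if b]

# Toy instance: 6 view regions, 5 candidate sample sets; an optimal cover has 3 sets.
chosen = ga_set_cover(set(range(6)), [{0, 1, 2}, {3, 4}, {5}, {0, 3}, {1, 4, 5}])
```

In the paper's setting the universe would be the regions of walkthrough space that must be covered by a visually correct view, and each subset the region covered by one candidate sample.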
{"title":"GA based adaptive sampling for image-based walkthrough","authors":"Dong Hoon Lee, Jong Ryul Kim, Soon Ki Jung","doi":"10.2312/EGVE/EGVE06/135-142","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/135-142","url":null,"abstract":"This paper presents an adaptive sampling method for image-based walkthrough. Our goal is to select minimal sets from the initially dense sampled data set, while guaranteeing a visual correct view from any position in any direction in walkthrough space. For this purpose we formulate the covered region for sampling criteria and then regard the sampling problem as a set covering problem. We estimate the optimal set using Genetic algorithm, and show the efficiency of the proposed method with several experiments.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124468363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GraphTracker: a topology projection invariant optical tracker
F. Smit, A. V. Rhijn, R. V. Liere
In this paper, we describe a new optical tracking algorithm for pose estimation of interaction devices in virtual and augmented reality. Given a 3D model of the interaction device and a number of camera images, the primary difficulty in pose reconstruction is finding the correspondence between 2D image points and 3D model points. Most previous methods solved this problem using stereo correspondence. Once the correspondence problem has been solved, the pose can be estimated by determining the transformation between the 3D point cloud and the model. Our approach is based on the projection-invariant topology of graph structures. The topology of a graph structure does not change under projection: in this way we solve the point correspondence problem with a subgraph matching algorithm between the detected 2D image graph and the model graph. There are four advantages to our method. First, the correspondence problem is solved entirely in 2D, so no stereo correspondence is needed. Consequently, we can use any number of cameras, including a single camera. Second, as opposed to stereo methods, we do not need to detect the same model point in two different cameras, making our method much more robust against occlusion. Third, the subgraph matching algorithm can still detect a match even when parts of the graph are occluded, for example by the user's hands, which provides further robustness against occlusion. Finally, the error made in the pose estimation is significantly reduced as the number of cameras increases.
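The correspondence idea can be illustrated with a brute-force subgraph matching test: the detected 2D image graph (which may be missing occluded parts) is mapped injectively into the model graph so that every detected edge is preserved. This is a minimal illustrative sketch, not the paper's matching algorithm:

```python
from itertools import permutations

def subgraph_match(image_adj, model_adj):
    """Find an injective vertex mapping from the image graph into the
    model graph that preserves every image edge (brute force; adequate
    only for the small graphs typical of tracked marker structures)."""
    iv = sorted(image_adj)
    for cand in permutations(sorted(model_adj), len(iv)):
        mapping = dict(zip(iv, cand))
        # Every detected edge must map onto a model edge; extra model
        # edges are fine, since occlusion only removes image edges.
        if all(mapping[b] in model_adj[mapping[a]]
               for a in image_adj for b in image_adj[a]):
            return mapping
    return None

# Model graph: a 4-cycle of markers. Detected image graph: a 3-vertex
# path (one marker/edge lost to occlusion) -- it still matches.
model = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
mapping = subgraph_match(path, model)
```

Because the mapping only requires detected edges to survive, a partially occluded device can still be identified, which is exactly the robustness property the abstract claims for graph topology under projection.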
{"title":"GraphTracker: a topology projection invariant optical tracker","authors":"F. Smit, A. V. Rhijn, R. V. Liere","doi":"10.2312/EGVE/EGVE06/063-070","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/063-070","url":null,"abstract":"In this paper, we describe a new optical tracking algorithm for pose estimation of interaction devices in virtual and augmented reality. Given a 3D model of the interaction device and a number of camera images, the primary difficulty in pose reconstruction is to find the correspondence between 2D image points and 3D model points. Most previous methods solved this problem by the use of stereo correspondence. Once the correspondence problem has been solved, the pose can be estimated by determining the transformation between the 3D point cloud and the model.\u0000 Our approach is based on the projective invariant topology of graph structures. The topology of a graph structure does not change under projection: in this way we solve the point correspondence problem by a subgraph matching algorithm between the detected 2D image graph and the model graph.\u0000 There are four advantages to our method. First, the correspondence problem is solved entirely in 2D and therefore no stereo correspondence is needed. Consequently, we can use any number of cameras, including a single camera. Secondly, as opposed to stereo methods, we do not need to detect the same model point in two different cameras, and therefore our method is much more robust against occlusion. Thirdly, the subgraph matching algorithm can still detect a match even when parts of the graph are occluded, for example by the users hands. This also provides more robustness against occlusion. 
Finally, the error made in the pose estimation is significantly reduced as the amount of cameras is increased.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134133279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
A survey and taxonomy of 3D menu techniques
Raimund Dachselt, Anett Hübner
A huge variety of interaction techniques has been developed in the field of virtual and augmented reality. Whereas techniques for object selection, manipulation, travel, and wayfinding are covered in considerable detail by existing taxonomies, application control techniques have not yet been sufficiently considered. However, they are needed by almost every mixed reality application, e.g. for choosing among alternative objects or options. For this purpose, a great variety of distinct three-dimensional menu selection techniques is available. This paper surveys existing 3D menus from the literature and classifies them according to various criteria. The taxonomy introduced here assists developers of interactive 3D applications in better evaluating their options when choosing and implementing a 3D menu technique. Since the taxonomy spans the design space of 3D menu solutions, it also aids researchers in identifying opportunities to improve or create novel virtual menu techniques.
{"title":"A survey and taxonomy of 3D menu techniques","authors":"Raimund Dachselt, Anett Hübner","doi":"10.2312/EGVE/EGVE06/089-099","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/089-099","url":null,"abstract":"A huge variety of interaction techniques was developed in the field of virtual and augmented reality. Whereas techniques for object selection, manipulation, travel, and wayfinding were covered in existing taxonomies quite in detail, application control techniques were not sufficiently deliberated yet. However, they are needed by almost every mixed reality application, e.g. for choosing from alternative objects or options. For this purpose a great variety of distinct three-dimensional menu selection techniques is available. This paper surveys existing 3D menus from the corpus of literature and classifies them according to various criteria. The taxonomy introduced here assists developers of interactive 3D applications to better evaluate their options when choosing and implementing a 3D menu technique. Since the taxonomy spans the design space for 3D menu solutions, it also aids researchers in identifying opportunities to improve or create novel virtual menu techniques.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130381948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Friction surfaces: scaled ray-casting manipulation for interacting with 2D GUIs
C. Andújar, F. Argelaguet
Accommodating conventional 2D GUIs within Virtual Environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets on a virtual window immersed in a 3D world. The main idea is to provide a new interaction mode in which hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray, providing visual feedback on the orientation of both the input device and the selection ray. The user's feeling is that of controlling a flexible ray that bends as it moves over a virtual friction surface defined by the 2D window. We have implemented this technique and evaluated its effectiveness in terms of accuracy and performance. Our experiments on a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed in 3D worlds.
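The control-display scaling at the heart of the technique can be sketched as follows: when the ray enters the active window, the hand orientation is recorded as an anchor, and subsequent rotations are applied at a reduced ratio. The yaw/pitch-only representation and the ratio value are illustrative simplifications, not the paper's implementation:

```python
class FrictionSurface:
    """Scaled ray-casting: while the selection ray is over the active 2D
    window, hand rotations are applied at a reduced control-display ratio,
    slowing the ray for precise widget selection."""

    def __init__(self, cd_ratio=0.25):
        self.cd_ratio = cd_ratio    # < 1: ray rotates slower than the hand
        self.anchor = None          # hand (yaw, pitch) when entering the window

    def enter(self, hand):
        self.anchor = hand          # ray and hand coincide on entry

    def leave(self):
        self.anchor = None          # back to ordinary 1:1 ray-casting

    def ray_orientation(self, hand):
        if self.anchor is None:     # outside the window: unscaled mapping
            return hand
        return tuple(a + self.cd_ratio * (h - a)
                     for a, h in zip(self.anchor, hand))

# A 10-degree yaw sweep of the hand moves the ray only 2.5 degrees.
fs = FrictionSurface(cd_ratio=0.25)
fs.enter((0.0, 0.0))
ray = fs.ray_orientation((10.0, -4.0))
```

The curved ray described in the abstract is then drawn from the hand's true orientation to this scaled orientation, giving the user feedback on both at once.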
{"title":"Friction surfaces: scaled ray-casting manipulation for interacting with 2D GUIs","authors":"C. Andújar, F. Argelaguet","doi":"10.2312/EGVE/EGVE06/101-108","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/101-108","url":null,"abstract":"The accommodation of conventional 2D GUIs with Virtual Environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets over a virtual window immersed into a 3D world. The main idea is to provide a new interaction mode where hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray providing visual feedback of the orientation of both the input device and the selection ray. The users' feeling is that they control a flexible ray that gets curved as it moves over a virtual friction surface defined by the 2D window. We have implemented this technique and evaluated its effectiveness in terms of accuracy and performance. 
Our experiments on a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed into 3D worlds.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133975617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Interactive data annotation in virtual environments
I. Assenmacher, B. Hentschel, C. Ni, T. Kuhlen, C. Bischof
Note-taking is an integral part of scientific data analysis. In particular, it is vital for explorative analysis, as the expression and transformation of ideas is a necessary precondition for gaining insight. However, during interactive data exploration in virtual environments it is not possible to keep pen and paper at hand. Additionally, data analysis in virtual environments allows the multi-modal exploration of complex and time-varying data. We propose IDEA, a toolkit-independent content generation system that features a defined process model, a generic annotation model supporting a variety of content types, and specially developed interaction metaphors for their input and output handling. This allows the user to note ideas, e.g. as text, images, or voice, without interfering with the analysis process. In this paper we present the basic concepts of this system. We describe the context-content model, which ties annotation content to logical objects that are part of the scene and stores specific information for special interactions in virtual environments. The IDEA system has already been applied in a prototypical implementation for exploring air flows in the human nasal cavity, where it is used for data analysis as well as interdisciplinary communication.
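The context-content idea — annotation content of several media types tied to logical scene objects — might be modeled along these lines. All names and fields here are illustrative assumptions, not IDEA's actual data model:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Annotation:
    """One note (the content) tied to a logical scene object (the context)."""
    target_object: str     # id of the scene object being annotated
    content_type: str      # e.g. "text", "image", "voice"
    payload: bytes         # raw content data, format depending on type
    position: tuple        # 3D anchor point of the note in the scene
    timestamp: float = field(default_factory=time.time)

class AnnotationStore:
    """Groups annotations by their context object for later retrieval."""

    def __init__(self):
        self._by_object = {}

    def add(self, ann):
        self._by_object.setdefault(ann.target_object, []).append(ann)

    def for_object(self, object_id):
        return self._by_object.get(object_id, [])

# Attach a text note and a voice note to the same flow-field probe.
store = AnnotationStore()
store.add(Annotation("flow_probe_1", "text", b"recirculation zone?", (0.1, 0.2, 0.3)))
store.add(Annotation("flow_probe_1", "voice", b"audio-bytes", (0.1, 0.2, 0.3)))
```

Keying annotations by logical object rather than raw coordinates is what lets notes stay attached to scene entities even as views or time steps change.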
{"title":"Interactive data annotation in virtual environments","authors":"I. Assenmacher, B. Hentschel, C. Ni, T. Kuhlen, C. Bischof","doi":"10.2312/EGVE/EGVE06/119-126","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/119-126","url":null,"abstract":"Note-taking is an integral part of scientific data analysis. In particular, it is vital for explorative analysis, as the expression and transformation of ideas is a necessary precondition for gaining insight. However, in the case of interactive data exploration in virtual environments it is not possible to keep a pen and pencil at hand. Additionally, data analysis in virtual environments allows the multi-modal exploration of complex and time varying data. We propose the toolkit independent content generation system IDEA that features a defined process model, a generic annotation model with a variety of content types as well as specially developed interaction metaphors for their input and output handling. This allows the user to note ideas, e.g., in form of text, images or voice without interfering with the analysis process. In this paper we present the basic concepts for this system. We describe the context-content model which allows to tie annotation content to logical objects that are part of the scene and stores specific information for the special interaction in virtual environments. 
The IDEA system is already applied in a prototypical implementation for the exploration of air flows in the human nasal cavity where it is used for data analysis as well as interdisciplinary communication.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128291585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Colosseum3D: authoring framework for virtual environments
A. Backman
This paper describes Colosseum3D, an authoring environment for real-time 3D environments. The framework makes it possible to easily create rich virtual environments with rigid-body dynamics, advanced rendering using OpenGL shaders, 3D sound, and human avatars. The creative process of building complex simulators is supported through several authoring paths: a low-level C++ API, an expressive high-level file format, and a scripting layer.
{"title":"Colosseum3D: authoring framework for virtual environments","authors":"A. Backman","doi":"10.2312/EGVE/IPT_EGVE2005/225-226","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/225-226","url":null,"abstract":"This paper describes an authoring environment for real time 3D environments, Colosseum3D. The framework makes it possible to easily create rich virtual environments with rigid-body dynamics, advanced rendering using OpenGL Shaders, 3D sound and human avatars. The creative process of building complex simulators is supported by allowing several authoring paths such as a low level C++ API, an expressive high level file format and a scripting layer.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130535158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
IntenSelect: using dynamic object rating for assisting 3D object selection
G. D. Haan, M. Koutek, F. Post
We present IntenSelect, a novel selection technique that dynamically assists the user in selecting 3D objects in Virtual Environments. Ray-casting selection is commonly used, but it has limited accuracy and can be problematic in more difficult situations where the intended object is occluded or moving. Selection-by-volume techniques, which extend normal ray-casting, provide error tolerance to cope with the limited accuracy. However, these extensions are generally not usable in more complex selection situations. We have devised a new selection-by-volume technique that is more flexible and can be used in these situations. To achieve this, we use a new scoring function to rate the objects that fall within a user-controlled conic selection volume. By accumulating these scores, we obtain a dynamic, time-dependent object ranking. The highest-ranking object, or active object, is indicated by bending the otherwise straight selection ray towards it. Since the selection ray is effectively snapped to the object, the user can select it more easily. Our user tests indicate that IntenSelect can improve selection performance over ray-casting, especially in the more difficult case of small objects. Furthermore, the time-dependent object ranking proves especially useful when objects are moving, occluded, or cluttered. Our simple scoring scheme can easily be extended for special-purpose interaction, such as widget- or application-specific interaction functionality, which creates new possibilities for complex interaction behavior.
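The two ingredients of the technique — an instantaneous score for objects inside the conic selection volume, and a time-dependent accumulation that stabilizes the ranking — can be sketched as follows. This is a simplified illustration; the cone half-angle and the growth/decay constants are made-up values, and the paper's actual scoring function also weights by distance:

```python
import math

def cone_contribution(ray_origin, ray_dir, obj_pos, half_angle_deg=15.0):
    """Instantaneous score in [0, 1]: 1 on the cone axis, falling to 0 at
    the rim and outside the conic selection volume. `ray_dir` is assumed
    to be a unit vector."""
    to_obj = [o - r for r, o in zip(ray_origin, obj_pos)]
    dist = math.sqrt(sum(x * x for x in to_obj))
    if dist == 0.0:
        return 1.0
    cos_a = sum(d * t for d, t in zip(ray_dir, to_obj)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return max(0.0, 1.0 - angle / half_angle_deg)

def accumulate(prev_score, contribution, growth=0.9, decay=0.75):
    """Time-dependent rating: an object's score builds up while it stays
    in the cone and falls off smoothly after it leaves, so the active
    object does not flicker between frames."""
    if contribution > 0.0:
        return prev_score * growth + contribution
    return prev_score * decay

# Per frame: update every candidate's score, pick the highest-ranking
# object as the active one, and bend the selection ray towards it.
c = cone_contribution((0, 0, 0), (0, 0, 1), (0.3, 0, 5))
```

The hysteresis introduced by the growth/decay accumulation is what makes the ranking robust for moving or briefly occluded targets: a momentary dropout merely lowers a score rather than immediately switching the active object.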
{"title":"IntenSelect: using dynamic object rating for assisting 3D object selection","authors":"G. D. Haan, M. Koutek, F. Post","doi":"10.2312/EGVE/IPT_EGVE2005/201-209","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/201-209","journal":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","publicationDate":"2005-10-06","publicationTypes":"Journal Article"}
Citations: 112
Journal
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments