
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments: Latest Publications

Integrating Real-time Binaural Acoustics into VR Applications
I. Assenmacher, T. Kuhlen, T. Lentz, M. Vorländer
Common research in the field of Virtual Reality (VR) considers acoustic stimulation a highly important prerequisite for enhanced immersion into virtual scenes. However, most common VR toolkits only marginally support the integration of sound for the application programmer. Furthermore, the quality of stimulation that is provided usually ranges from system sounds (e.g. beeps while selecting a menu) to simple 3D panning. In the latter case, these approaches only allow the user to correctly detect sounds that are at quite a distance from his current position. Binaural synthesis is an interesting way to provide a spatial auditory representation using few loudspeakers or headphones. This paper describes a system that creates a binaural representation in real time for a listener interacting in a common visual VR application, thus enabling research on the interaction between the visual and auditory human perception systems. It describes the theoretical background for establishing a binaural representation of a sound and the necessary hardware set-up, and then discusses the infrastructure and software interface that connect the audio renderer to a visual VR toolkit.
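Although the abstract gives no code, the binaural synthesis it refers to boils down to convolving the mono source signal with a head-related impulse response (HRIR) pair selected for the source direction. The following is a minimal offline sketch under assumed, hypothetical HRIR data; it is not the authors' real-time renderer, which would run block-wise and update the HRIR pair from the VR tracking data.

```python
# Minimal binaural-synthesis sketch (illustrative only): one mono source is
# convolved with a left/right HRIR pair chosen for its direction.
import numpy as np

def binaural_synthesis(mono, hrir_left, hrir_right):
    """Return an (N, 2) array of left/right ear signals for headphone playback."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

if __name__ == "__main__":
    fs = 44100
    click = np.zeros(fs // 10)
    click[0] = 1.0                                   # a unit impulse as test signal
    hrir_l = np.random.randn(128) * np.hanning(128)  # placeholder HRIRs, not measured data
    hrir_r = np.random.randn(128) * np.hanning(128)
    print(binaural_synthesis(click, hrir_l, hrir_r).shape)
```

In the real-time case described in the paper, this convolution would be performed continuously (e.g. with an overlap-add scheme) and the HRIR pair re-selected whenever the tracked listener or the virtual source moves.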
DOI: 10.2312/EGVE/EGVE04/129-136 (published 2004-06-08)
Citations: 18
ENVIRON - Visualization of CAD Models In a Virtual Reality Environment
E. Corseuil, A. Raposo, Romano J. M. da Silva, Marcio H. G. Pinto, G. Wagner, M. Gattass
This paper presents ENVIRON (ENvironment for VIRtual Objects Navigation), an application developed out of the need to use Virtual Reality with large industrial engineering models originating from CAD (Computer Aided Design) tools. This work analyzes the main problems related to the production of a VR model derived from the CAD model.
DOI: 10.2312/EGVE/EGVE04/079-082 (published 2004-06-08)
Citations: 28
A Real-Time System for Full Body Interaction with Virtual Worlds
Jean-Marc Hasenfratz, M. Lapierre, F. Sillion
Real-time video acquisition is becoming a reality with the most recent camera technology. Three-dimensional models can be reconstructed from multiple views using visual hull carving techniques. However, the combination of these approaches to obtain a moving 3D model from simultaneous video captures remains a technological challenge. In this paper we demonstrate a complete system architecture allowing the real-time (≤ 30 fps) acquisition and full-body reconstruction of one or several actors, which can then be integrated in a virtual environment. A volume of approximately 2 m³ is observed with (at least) four video cameras, and the video fluxes are processed to obtain a volumetric model of the moving actors. The reconstruction process uses a mixture of pipelined and parallel processing, with N individual PCs for N cameras and a central computer for integration, reconstruction and display. A surface description is obtained using a marching cubes algorithm. We discuss the overall architecture choices, with particular emphasis on the real-time constraint and latency issues, and demonstrate that a software synchronization of the video fluxes is both sufficient and efficient. The ability to reconstruct a full-body model of the actors and any additional props or elements opens the way for very natural interaction techniques using the entire body and real elements manipulated by the user, whose avatar is immersed in a virtual world.
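The visual hull carving step can be pictured with a toy voxel-carving routine: a voxel survives only if it projects inside the actor silhouette of every camera, and a marching cubes pass then extracts the surface from the surviving voxels. The camera model, silhouette mask and grid size below are hypothetical stand-ins; the authors' system distributes this work across the per-camera PCs and the central node.

```python
# Toy visual hull carving over a voxel grid (illustrative, not the paper's pipeline).
# `cameras` is a list of (project, silhouette) pairs: `project` maps a 3D point to
# pixel coordinates and `silhouette` is a binary foreground mask.
import numpy as np

def carve_visual_hull(grid_min, grid_max, resolution, cameras):
    xs = np.linspace(grid_min[0], grid_max[0], resolution)
    ys = np.linspace(grid_min[1], grid_max[1], resolution)
    zs = np.linspace(grid_min[2], grid_max[2], resolution)
    occupancy = np.ones((resolution, resolution, resolution), dtype=bool)
    for project, silhouette in cameras:
        h, w = silhouette.shape
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                for k, z in enumerate(zs):
                    if not occupancy[i, j, k]:
                        continue                       # already carved away
                    u, v = project(np.array([x, y, z]))
                    inside = 0 <= int(u) < w and 0 <= int(v) < h and silhouette[int(v), int(u)]
                    if not inside:
                        occupancy[i, j, k] = False     # outside one silhouette: carve
    return occupancy

if __name__ == "__main__":
    # One dummy orthographic camera looking down -z, seeing a disc-shaped silhouette.
    mask = np.zeros((64, 64), dtype=bool)
    yy, xx = np.mgrid[0:64, 0:64]
    mask[(xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2] = True
    project = lambda p: (32 + p[0] * 30, 32 + p[1] * 30)   # world [-1, 1] to pixels
    occ = carve_visual_hull((-1, -1, -1), (1, 1, 1), 24, [(project, mask)])
    print(int(occ.sum()), "voxels remain inside the (single-view) hull")
```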
DOI: 10.2312/EGVE/EGVE04/147-156 (published 2004-06-08)
Citations: 63
Using Saccadic Suppression to Hide Graphic Updates
J. Schumacher, R. Allison, R. Herpers
In interactive graphics it is often necessary to introduce large changes in the image in response to updated information about the state of the system. Updating the local state immediately would lead to a sudden transient change in the image, which could be perceptually disruptive. However, introducing the correction gradually using smoothing operations increases latency and degrades precision. It would therefore be beneficial to introduce graphic updates immediately if they were not perceptible. In this paper, saccade-contingent updates are exploited to hide graphic updates during the period of visual suppression that accompanies a rapid, or saccadic, eye movement. Sensitivity to many visual stimuli is known to be reduced during a change in fixation compared to when the eye is still. For example, motion of a small object is harder to detect during a rapid eye movement (saccade) than during a fixation. To evaluate whether these findings generalize to large scene changes in a virtual environment, gaze behavior in a 180 degree hemispherical display was recorded and analyzed. This data was used to develop a saccade detection algorithm adapted to virtual environments. The detectability of trans-saccadic scene changes was evaluated using images of high resolution real world scenes. The images were translated by 0.4, 0.8 or 1.2 degrees of visual angle during horizontal saccades. The scene updates were rarely noticeable for saccades with a duration greater than 58 ms; the detection rate for the smallest translation was just 6.25%. Qualitatively, even when trans-saccadic scene changes were detectable, they were much less disturbing than equivalent changes in the absence of a saccade.
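The generic idea behind saccade-contingent updates is to flag samples of the gaze signal whose angular velocity exceeds a threshold and to schedule scene changes only inside those windows. The sketch below is a simplified stand-in for the display-specific detector developed in the paper; the sampling rate and threshold are illustrative values, not taken from the paper.

```python
# Simple velocity-threshold saccade detector (a generic sketch, not the paper's
# algorithm). Gaze samples are given as time (s), azimuth (deg) and elevation (deg).
import numpy as np

def detect_saccades(t, az, el, velocity_threshold=100.0):
    """Return a boolean array: True where angular gaze speed exceeds the threshold
    (deg/s), i.e. where a graphic update is likely to fall under saccadic suppression."""
    t, az, el = map(np.asarray, (t, az, el))
    speed = np.sqrt(np.diff(az) ** 2 + np.diff(el) ** 2) / np.diff(t)
    return np.concatenate([[False], speed > velocity_threshold])

if __name__ == "__main__":
    t = np.arange(0, 0.1, 0.004)              # 250 Hz gaze samples
    az = np.where(t < 0.05, 0.0, 8.0)         # an 8-degree jump mimicking a saccade
    el = np.zeros_like(t)
    print(detect_saccades(t, az, el))
```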
DOI: 10.2312/EGVE/EGVE04/017-024 (published 2004-06-08)
Citations: 13
Multi-Finger Haptic Rendering of Deformable Objects
Anderson Maciel, Sofiane Sarni, Olivier Buchwalder, R. Boulic, D. Thalmann
The present paper describes the integration of a multi-finger haptic device with deformable objects in an interactive environment. Repulsive forces are synthesized and rendered independently for each finger of a user wearing a Cybergrasp force-feedback glove. Deformation and contact models are based on mass-spring systems, and the issue of user independence is dealt with through a geometric calibration phase. Motivated by the knowledge that the human hand plays a very important role in the somatosensory system, we focused on the potential of the Cybergrasp device to improve perception in Virtual Reality worlds. We especially explored whether it is possible to distinguish objects with different elasticities. Results of performance and perception tests are encouraging despite current technical and computational limitations.
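The per-finger force rendering can be illustrated with a simple penalty-force rule against the closest point of the deformed surface: the deeper the fingertip penetrates, the stronger the outward force. The stiffness value and the contact representation below are assumptions for illustration only, not the authors' mass-spring contact model.

```python
# Penalty-style repulsive force for one fingertip against a deformable surface
# (illustrative sketch; stiffness and contact data are made up).
import numpy as np

def finger_repulsive_force(finger_pos, surface_point, surface_normal, stiffness=300.0):
    """Spring-like force (N) pushing the fingertip out along the outward surface normal.
    Varying `stiffness` is what makes objects of different elasticity feel different."""
    penetration = np.dot(surface_point - finger_pos, surface_normal)
    if penetration <= 0.0:
        return np.zeros(3)                    # fingertip outside the object: no force
    return stiffness * penetration * surface_normal

if __name__ == "__main__":
    f = finger_repulsive_force(np.array([0.0, 0.0, -0.01]),   # 1 cm inside the surface
                               np.array([0.0, 0.0, 0.0]),
                               np.array([0.0, 0.0, 1.0]))
    print(f)                                  # [0, 0, 3] N, pushing the finger back out
```

In a multi-finger setting this computation is simply repeated for each tracked fingertip, and the resulting forces are sent to the corresponding actuators of the glove.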
DOI: 10.2312/EGVE/EGVE04/105-112 (published 2004-06-08)
Citations: 27
An Experimental Comparison of Three Optical Trackers for Model Based Pose Determination in Virtual Reality
R. V. Liere, A. V. Rhijn
In recent years, many optical trackers have been proposed for use in Virtual Environments. In this paper, we compare three model-based optical tracking algorithms for pose determination of input devices. In particular, we study the behavior of these algorithms when applied to two-handed manipulation tasks. We experimentally show how critical parameters influence the relative accuracy, latency and robustness of each algorithm. Although the study has been performed in a specific near-field virtual environment, the results can be applied to other virtual environments such as workbenches and CAVEs.
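The abstract does not spell out the three algorithms that are compared, so the sketch below only shows a standard building block of model-based optical tracking: recovering a rigid pose from matched model and measured 3D marker positions with the SVD-based least-squares (Kabsch) solution. It is background material, not one of the evaluated trackers.

```python
# Least-squares rigid pose from matched 3D marker positions (Kabsch / SVD).
import numpy as np

def rigid_pose(model_pts, measured_pts):
    """Return (R, t) minimizing sum || R @ m_i + t - p_i ||^2 over all markers."""
    mc = model_pts.mean(axis=0)
    pc = measured_pts.mean(axis=0)
    H = (model_pts - mc).T @ (measured_pts - pc)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = pc - R @ mc
    return R, t

if __name__ == "__main__":
    model = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]], dtype=float)
    R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)   # 90 deg about z
    measured = model @ R_true.T + np.array([0.5, 0.2, 0.0])
    R, t = rigid_pose(model, measured)
    print(np.allclose(R, R_true), np.round(t, 3))
```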
DOI: 10.2312/EGVE/EGVE04/025-034 (published 2004-06-08)
Citations: 9
Medical Augmented Reality based on Commercial Image Guided Surgery
J. Fischer, M. Neff, D. Freudenstein, D. Bartz
Utilizing augmented reality for applications in medicine has been a topic of intense research for several years. A number of challenging tasks need to be addressed when designing a medical AR system. These include the import and management of medical datasets and preoperatively created planning data, the registration of the patient with respect to a global coordinate system, and accurate tracking of the camera used in the AR setup as well as the respective surgical instruments. Most research systems rely on specialized hardware or algorithms for realizing augmented reality in medicine. Such base technologies can be expensive or very time-consuming to implement. In this paper, we propose an alternative approach of building a surgical AR system by harnessing existing, commercially available equipment for image guided surgery (IGS). We describe the prototype of an augmented reality application, which receives all necessary information from a device for intraoperative navigation.
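The data flow described above, in which a pose streamed by the image guided surgery (IGS) navigation system drives the AR camera, can be sketched as a small pose-to-view-matrix conversion. The (R, t) inputs are assumed to be the camera pose delivered by the commercial navigation interface; that interface's actual API is not reproduced here.

```python
# Turning a tracked camera pose (rotation R, translation t, camera-to-world) into
# the 4x4 world-to-camera view matrix used to render registered virtual overlays.
import numpy as np

def view_matrix_from_pose(R, t):
    view = np.eye(4)
    view[:3, :3] = R.T            # inverse rotation
    view[:3, 3] = -R.T @ t        # inverse translation
    return view

if __name__ == "__main__":
    R = np.eye(3)                           # camera axes aligned with tracker axes
    t = np.array([0.0, 0.0, 0.5])           # camera 0.5 m along +z in tracker space
    print(view_matrix_from_pose(R, t))
```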
DOI: 10.2312/EGVE/EGVE04/083-086 (published 2004-06-08)
Citations: 64
Foveated Stereoscopic Display for the Visualization of Detailed Virtual Environments
G. Godin, P. Massicotte, L. Borgeat
We present a new method for the stereoscopic display of complex virtual environments using a foveated arrangement of four images. The system runs on four rendering nodes and four projectors, for the fovea and periphery in each eye view. The use of high-resolution insets in a foveated configuration is well known. However, its extension to projector-based stereoscopic displays raises a specific issue: the visible boundary between fovea and periphery present in each eye creates a stereoscopic cue that may conflict with the perceived depth of the underlying scene. A previous solution to this problem displaces the boundary in the images to ensure that it is always positioned over stereoscopically corresponding scene locations. The new method proposed here addresses the same problem, but by relaxing the stereo matching criteria and reformulating the problem as one of spatial partitioning, all computations are performed locally on each node, and require a small and fixed amount of post-rendering processing, independent of scene complexity. We discuss this solution and present an OpenGL implementation; we also discuss acceleration techniques using culling and fragments, and illustrate the use of the method on a complex 3D textured model of a Byzantine crypt built using laser range imaging and digital photography.
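The fovea/periphery partition can be pictured as an asymmetric sub-frustum cut out of the full per-eye frustum: the inset keeps the same centre of projection and near plane, so the two renderings line up exactly. The helper below computes such a sub-frustum as OpenGL-style glFrustum parameters; the screen and inset values are assumed for illustration, and this is not the paper's implementation.

```python
# Cut the fovea inset's asymmetric frustum out of the full off-axis frustum.
# The full frustum is given by its near-plane extents; the inset is a rectangle
# in normalized [0, 1] screen coordinates.

def fovea_subfrustum(left, right, bottom, top, near, inset_min, inset_max):
    """Return (l, r, b, t, near) for the inset, usable as glFrustum-style parameters
    on the fovea rendering node (the periphery node keeps the full frustum)."""
    u0, v0 = inset_min
    u1, v1 = inset_max
    l = left + u0 * (right - left)
    r = left + u1 * (right - left)
    b = bottom + v0 * (top - bottom)
    t = bottom + v1 * (top - bottom)
    return l, r, b, t, near

if __name__ == "__main__":
    # A centred inset covering the middle 40% of a hypothetical screen:
    print(fovea_subfrustum(-0.1, 0.1, -0.075, 0.075, 0.1, (0.3, 0.3), (0.7, 0.7)))
```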
DOI: 10.2312/EGVE/EGVE04/007-016 (published 2004-06-08)
Citations: 9
Digital Mock-up database simplification with the help of view and application dependent criteria for industrial Virtual Reality application
Marc Chevaldonné, M. Neveu, F. Mérienne, M. Dureigne, N. Chevassus, F. Guillaume
Aircraft cockpits are advanced interfaces dedicated to the interaction and exchange of observations and commands between the pilot and the flying system. The design process of cockpits benefits from the use of Virtual Reality technologies: early ergonomics and layout analysis through the exploration of numerous alternatives, availability throughout the cockpit life cycle of a virtual product ready for experimentation, and reduced usage of costly physical mock-ups. Nevertheless, the construction of a virtual cockpit with adequate performance is very complex. Because the CAD-based digital mock-up used for setting up the virtual cockpit is very large, one challenge is to achieve interactivity while maintaining the quality of rendering. The reduction of the information contained in the CAD database must achieve a sufficient frame rate without degrading the geometrical visual quality of the virtual cockpit, which would compromise the relevance of ergonomics and layout studies. This paper proposes to control the simplification process by using objective criteria based on considerations about the cockpit application and the visual performances of human beings. First, it presents the results of studies on the characteristics of the Human Visual System linked to virtual reality and visualization applications. Illustrated by first results, it then establishes how to control the simplification in a rational and automatic way.
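A simple example of a view-dependent criterion of the kind discussed here is the largest geometric error that still subtends less than the eye's angular resolution at a given viewing distance: details below that size can be simplified away without visible loss. The one-arcminute acuity figure below is a common textbook value, used as an assumption rather than a result from the paper.

```python
# Largest geometric error invisible at a given viewing distance, for a given
# visual acuity (illustrative criterion; values are assumptions).
import math

def max_allowed_error(viewing_distance_m, acuity_arcmin=1.0):
    """Geometric error (m) subtending less than `acuity_arcmin` at the eye."""
    acuity_rad = math.radians(acuity_arcmin / 60.0)
    return viewing_distance_m * math.tan(acuity_rad)

if __name__ == "__main__":
    # A cockpit detail 0.6 m from the pilot's eye may be simplified once its error is below:
    print(f"{max_allowed_error(0.6) * 1000:.2f} mm")      # about 0.17 mm
```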
DOI: 10.2312/EGVE/EGVE04/113-122 (published 2004-06-08)
Citations: 4
Lateral Head Tracking in Desktop Virtual Reality
Breght R. Boschker, J. D. Mulder
Head coupled perspective is often considered to be an essential aspect of stereoscopic desktop virtual reality (VR) systems. Such systems use a tracking device to determine the user's head pose in up to six degrees of freedom (DOF). Users of desktop VR systems perform their task while sitting down and therefore the extent of head movements is limited. This paper investigates the validity of using a head tracking system for desktop VR that only tracks lateral head movement. Users performed a depth estimation task under full (six DOF) head tracking, lateral head tracking, and disabled head tracking. Furthermore, we considered stereoscopic and monoscopic viewing. Our results show that user performance was not significantly affected when incorporating only lateral head motion. Both lateral and full head tracking performed better than the disabled head tracking case.
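Head-coupled perspective of the kind evaluated here is typically realized with an off-axis projection computed from the tracked head position; with lateral-only tracking, just the horizontal component varies while height and viewing distance are held fixed. The sketch below computes OpenGL-style glFrustum parameters for a screen centred at the origin of its own plane; the screen size, viewing distance and head offset are assumed values, not the experimental setup.

```python
# Off-axis ("fish-tank VR") frustum from a tracked head position in front of a
# desktop screen centred at the origin of its own plane. All lengths in metres.

def off_axis_frustum(head_x, screen_width, screen_height,
                     head_y=0.0, head_dist=0.6, near=0.1):
    """Return (left, right, bottom, top, near) glFrustum-style parameters as seen
    from head position (head_x, head_y, head_dist). With lateral-only tracking,
    only `head_x` is updated per frame."""
    scale = near / head_dist
    left = (-screen_width / 2.0 - head_x) * scale
    right = (screen_width / 2.0 - head_x) * scale
    bottom = (-screen_height / 2.0 - head_y) * scale
    top = (screen_height / 2.0 - head_y) * scale
    return left, right, bottom, top, near

if __name__ == "__main__":
    # Head 5 cm to the right of centre, 0.4 m x 0.3 m display:
    print(off_axis_frustum(0.05, 0.4, 0.3))
```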
DOI: 10.2312/EGVE/EGVE04/045-052 (published 2004-06-08)
Citations: 6