
Latest publications from the 2009 IEEE Virtual Reality Conference

Spatialized Haptic Rendering: Providing Impact Position Information in 6DOF Haptic Simulations Using Vibrations
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4810990
Jean Sreng, A. Lécuyer, C. Andriot, B. Arnaldi
In this paper we introduce a "Spatialized Haptic Rendering" technique to enhance 6DOF haptic manipulation of virtual objects with impact position information conveyed through vibrations. This rendering technique exploits our perceptual ability to localize a contact from the vibrations generated by the impact. In particular, the different vibrations generated by a beam are used to convey the impact position information. We present two experiments conducted to tune and evaluate our spatialized haptic rendering technique. The first investigates the vibration parameters (amplitudes/frequencies) needed for efficient discrimination of the force patterns used for spatialized haptic rendering. The second evaluates spatialized haptic rendering during 6DOF manipulation. Taken together, the results suggest that spatialized haptic rendering can improve the haptic perception of impact position in complex 6DOF interactions.
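One common way to render such impact vibrations is a decaying sinusoid whose parameters depend on where the beam was struck. The sketch below illustrates that idea only; the frequency range, decay constant, and linear position-to-frequency mapping are illustrative assumptions, not the values tuned in the paper's experiments.

```python
import math

def impact_vibration(position, t, f_near=300.0, f_far=100.0,
                     a0=1.0, decay=30.0):
    """Decaying sinusoid whose frequency encodes impact position.

    position: normalized impact location along the beam (0 = near the
    grasp point, 1 = far end). f_near/f_far, a0 and decay are
    hypothetical illustration values, not the paper's tuned parameters.
    """
    # farther impacts map to a lower vibration frequency
    f = f_near + (f_far - f_near) * position
    # exponentially damped sine, as in a simple struck-beam model
    return a0 * math.exp(-decay * t) * math.sin(2.0 * math.pi * f * t)
```

Sampling this signal at the haptic update rate and adding it to the contact force would yield one candidate force pattern per impact position.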
Citations: 11
A virtual reality claustrophobia therapy system - implementation and test
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811020
M. Bruce, H. Regenbrecht
Virtual reality exposure therapy (VRET) is becoming an increasingly commonplace technique for the treatment of a wide range of psychological disorders, such as phobias. Effective virtual reality systems are suggested to invoke presence, which in turn elicits an emotional response, helping to lead to a successful treatment outcome. However, a number of problems are apparent: (1) the expense of traditional virtual reality systems hampers their widespread adoption; (2) research into several disorders is still limited in depth; and (3) presence and its relation to delivery mechanism and treatment outcome are still not entirely understood. We implemented and experimentally investigated an immersive VRET prototype system for the treatment of claustrophobia, a system that combines affordability, robustness and practicality while providing presence and effectiveness in treatment. The prototype system was heuristically evaluated and a controlled treatment scenario experiment using a non-clinical sample was performed. In the following, we describe the background, system concept and implementation, the tests and future directions.
Citations: 46
Evaluating the Influence of Haptic Force-Feedback on 3D Selection Tasks using Natural Egocentric Gestures
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4810992
V. Pawar, A. Steed
Immersive Virtual Environments (IVEs) allow participants to interact with their 3D surroundings using natural hand gestures. Previous work shows that the addition of haptic feedback cues improves performance on certain 3D tasks. However, we believe this is not true for all situations. Depending on the difficulty of the task, we suggest that we should expect differences in the ballistic movement of our hands when presented with different types of haptic force-feedback conditions. We investigated how hard, soft and no haptic force-feedback responses, experienced when in contact with the surface of an object, affected user performance on a task involving selection of multiple targets. To do this, we implemented a natural egocentric selection interaction technique by integrating a two-handed large-scale force-feedback device into a CAVE™-like IVE system. With this, we performed a user study showing that participants perform selection tasks best when interacting with targets that exert soft haptic force-feedback cues. For targets that have hard and no force-feedback properties, we highlight certain associated hand movements that participants make under these conditions, which we hypothesise reduce their performance.
Citations: 15
Image Blending and View Clustering for Multi-Viewer Immersive Projection Environments
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4810998
J. Marbach
Investment into multi-wall Immersive Virtual Environments is often motivated by the potential for small groups of users to work collaboratively, yet most systems only allow for stereographic rendering from a single viewpoint. This paper discusses approaches for supporting copresent head-tracked users in an immersive projection environment, such as the CAVE™, without relying on additional projection and frame-multiplexing technology. The primary technique presented here is called Image Blending and consists of rendering independent views for each head-tracked user to an off-screen buffer and blending the images into a final composite view using view-vector incidence angles as weighting factors. Additionally, users whose view-vectors intersect a projection screen at similar locations are grouped into a view-cluster. Clustered user views are rendered from the average head position and orientation of all users in that cluster. The clustering approach minimizes users' exposure to undesirable display artifacts such as inverted stereo pairs and nonlinear object projections by distributing projection error over all tracked viewers. These techniques have the added advantage that they can be easily integrated into existing systems with minimally increased hardware and software requirements. We compare Image Blending and View Clustering with previously published techniques and discuss possible implementation optimizations and their tradeoffs.
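The incidence-angle weighting described above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: it assumes each user's view direction points toward the screen and the screen normal points back into the room, weights each user by the clamped cosine of incidence, and composites small grayscale "images" per pixel.

```python
def incidence_weights(view_dirs, screen_normal):
    """Per-user blend weights from view-vector incidence angles.

    Assumes view_dirs point toward the screen and screen_normal points
    back toward the viewers; each weight is the clamped cosine of
    incidence, normalized so the weights sum to 1.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    raw = [max(0.0, -dot(d, screen_normal)) for d in view_dirs]
    total = sum(raw) or 1.0
    return [w / total for w in raw]

def blend(images, weights):
    """Weighted per-pixel composite of equally sized grayscale images."""
    return [sum(w * img[i] for img, w in zip(images, weights))
            for i in range(len(images[0]))]
```

A user looking head-on at the screen thus dominates the composite, while an oblique viewer contributes proportionally less.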
Citations: 14
DiVE into Alcohol: A Biochemical Immersive Experience
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811055
Marcel Yang, D. McMullen, R. Schwartz-Bloom, R. Brady
We present DiVE into Alcohol, a virtual reality (VR) program that can be used in chemistry education at the high school and college level, either as an immersive experience or as a web-based program. The program is presented in the context of an engaging topic--the oxidation of alcohol based on genetic differences in metabolism within the liver cell. The user follows alcohol molecules through the body to the liver and into the enzyme active site where the alcohol is oxidized. A gaming format allows the user to choose molecules and orient them in 3D space to enable the oxidation reaction. Interactivity also includes choices of different forms of the enzyme based on the genetically-coded structure and rates of oxidation that lead to intoxication vs sickness. DiVE into Alcohol was constructed with the use of a variety of software that provide enzyme structure ("pdb" files, MolProbity, 3D Kinemage), modeling (Autodesk Maya®), and VR technology (3DVIA VirTools®).
Citations: 5
Scalable Vision-based Gesture Interaction for Cluster-driven High Resolution Display Systems
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811030
Xun Luo, R. Kenyon
We present a coordinated ensemble of scalable computing techniques to accelerate a number of key tasks needed for vision-based gesture interaction, by using the cluster driving a large display system. A hybrid strategy that partitions the scanning task of a frame image by both region and scale is proposed. Based on this hybrid strategy, a novel data structure called a scanning tree is designed to organize the computing nodes. The level of effectiveness of the proposed solution was tested by incorporating it into a gesture interface controlling an ultra-high-resolution tiled display wall.
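The hybrid region/scale partition can be sketched roughly as below. This is an illustrative flattening of the idea, not the paper's scanning-tree design: each detection-window scale forms one level, each level is split into two half-frame regions (the two-way split is an assumption), and the resulting subtasks are dealt round-robin to the cluster nodes.

```python
def build_scan_assignment(width, height, scales, num_nodes):
    """Assign (scale, region) scan subtasks to cluster nodes.

    scales: detection-window sizes to scan the frame at.
    Returns {node_id: [subtask, ...]}. The half-frame region split and
    round-robin dealing are illustrative choices.
    """
    tasks = []
    for scale in scales:                      # one level per window scale
        for region in ((0, 0, width // 2, height),       # left half
                       (width // 2, 0, width, height)):  # right half
            tasks.append({"scale": scale, "region": region})
    # deal subtasks to nodes round-robin so the load stays balanced
    return {n: tasks[n::num_nodes] for n in range(num_nodes)}
```

For 3 scales and 4 nodes this yields 6 subtasks spread so that no node holds more than 2.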
Citations: 9
Natural Eye Motion Synthesis by Modeling Gaze-Head Coupling
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811014
Xiaohan Ma, Z. Deng
Due to the intrinsic subtlety and dynamics of eye movements, automated generation of natural and engaging eye motion has been a challenging task for decades. In this paper we present an effective technique to synthesize natural eye gazes given a head motion sequence as input, by statistically modeling the innate coupling between gazes and head movements. We first simultaneously recorded head motions and eye gazes of human subjects, using a novel hybrid data acquisition solution consisting of an optical motion capture system and off-the-shelf video cameras. Then, we statistically learn gaze-head coupling patterns using a dynamic coupled component analysis model. Finally, given a head motion sequence as input, we can synthesize its corresponding natural eye gazes based on the constructed gaze-head coupling model. Through comparative user studies and evaluations, we found that, compared with state-of-the-art eye motion synthesis algorithms, our approach is more effective at generating natural gazes correlated with given head motions. We also showed the effectiveness of our approach for gaze simulation in two-party conversations.
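The overall pipeline — learn a gaze-head coupling from recorded pairs, then predict gaze from a new head sequence — can be caricatured with a one-dimensional least-squares fit. This is a drastic simplification for illustration only: the paper uses a dynamic coupled component analysis model, not a static linear regression.

```python
def fit_linear_coupling(head, gaze):
    """Least-squares line gaze ≈ a*head + b over recorded angle pairs.

    A stand-in for the paper's dynamic coupled component analysis;
    it captures only a static linear gaze-head relation.
    """
    n = len(head)
    mh = sum(head) / n
    mg = sum(gaze) / n
    cov = sum((h - mh) * (g - mg) for h, g in zip(head, gaze))
    var = sum((h - mh) ** 2 for h in head) or 1.0
    a = cov / var
    b = mg - a * mh
    return a, b

def synthesize_gaze(head_track, a, b):
    """Predict a gaze angle for each frame of a head motion sequence."""
    return [a * h + b for h in head_track]
```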
Citations: 53
Real-time Volumetric Reconstruction and Tracking of Hands and Face as a User Interface for Virtual Environments
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811035
C. John, Ulrich Schwanecke, H. Regenbrecht
Enhancing desk-based computer environments with virtual reality technology requires natural interaction support, in particular head and hand tracking. Today's motion capture systems instrument users with intrusive hardware like optical markers or data gloves, which limit the perceived realism of interactions with a virtual environment and constrain the free moving space of operators. Our work therefore focuses on the development of fault-tolerant techniques for fast, non-contact 3D hand motion capture, targeted at application in standard office environments. This paper presents a table-top setup which utilizes vision-based volumetric reconstruction and tracking of skin-colored objects to integrate the user's hands and face into virtual environments. The system is based on off-the-shelf hardware components and satisfies real-time constraints.
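The first stage of such a pipeline — segmenting skin-colored pixels before reconstruction — is often done with a simple color rule. The sketch below uses the classic RGB skin heuristic of Kovac et al. purely as a stand-in; it is not the paper's actual segmentation method.

```python
def is_skin(r, g, b):
    """Classic RGB skin-color rule (Kovac et al., uniform daylight).

    An illustrative stand-in for the paper's skin segmentation, not
    its actual classifier. r, g, b are 0-255 channel values.
    """
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_mask(pixels):
    """Binary mask over an iterable of (r, g, b) pixels."""
    return [is_skin(*p) for p in pixels]
```

Masks computed per camera like this would then feed a visual-hull style volumetric reconstruction of the hands and face.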
Citations: 5
Improving Spatial Perception for Augmented Reality X-Ray Vision
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811002
Ben Avery, C. Sandor, B. Thomas
Augmented reality x-ray vision allows users to see through walls and view real occluded objects and locations. We present an augmented reality x-ray vision system that employs multiple view modes to support new visualizations that provide depth cues and spatial awareness to users. The edge overlay visualization provides depth cues to make hidden objects appear to be behind walls, rather than floating in front of them. Utilizing this edge overlay, the tunnel cut-out visualization provides details about occluding layers between the user and remote location. Inherent limitations of these visualizations are addressed by our addition of view modes allowing the user to obtain additional detail by zooming in, or an overview of the environment via an overhead exocentric view.
Citations: 130
A Software Architecture for Sharing Distributed Virtual Worlds
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811050
F. Drolet, M. Mokhtari, François Bernier, D. Laurendeau
This paper presents a generic software architecture developed to allow users located at different physical locations to share the same virtual environment and to interact with each other and the environment in a coherent and transparent manner.
Citations: 8