
Latest publications: 2011 IEEE Virtual Reality Conference

Mobile Augmented Reality using scalable recognition and tracking
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759473
Jae-Deok Ha, Jinki Jung, Byungok Han, Kyusung Cho, H. Yang
This paper proposes a new mobile Augmented Reality (AR) framework that scales with the number of objects being augmented. Scalability is achieved by a visual-word recognition module on a remote server and a mobile phone that detects, tracks, and augments target objects using the information received from the server. The server and the phone communicate over a conventional Wi-Fi connection. In the experiment, the cold start of an AR service takes 0.2 seconds on a 10k-object database, which is acceptable for a real-world AR application.
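The server-side visual-word recognition the abstract describes can be sketched as a bag-of-visual-words inverted index: descriptors are quantized to the nearest vocabulary centroid, and database objects are retrieved by voting. This is an illustrative sketch of the general technique, not the authors' code; the class and method names are hypothetical.

```python
import numpy as np

class VisualWordIndex:
    """Toy bag-of-visual-words index: quantize descriptors to a fixed
    vocabulary and retrieve objects via an inverted index with voting."""

    def __init__(self, vocabulary):
        # vocabulary: (k, d) array of visual-word centroids
        self.vocabulary = vocabulary
        self.inverted = {}  # word id -> set of object ids

    def quantize(self, descriptors):
        # assign each descriptor to its nearest visual word (brute force)
        d = np.linalg.norm(
            descriptors[:, None, :] - self.vocabulary[None, :, :], axis=2)
        return d.argmin(axis=1)

    def add_object(self, obj_id, descriptors):
        for w in self.quantize(descriptors):
            self.inverted.setdefault(int(w), set()).add(obj_id)

    def query(self, descriptors):
        # vote for database objects that share visual words with the query
        votes = {}
        for w in self.quantize(descriptors):
            for obj in self.inverted.get(int(w), ()):
                votes[obj] = votes.get(obj, 0) + 1
        return max(votes, key=votes.get) if votes else None
```

A production system of the 10k-object scale reported here would replace the brute-force nearest-centroid step with an approximate search structure (e.g. a vocabulary tree), but the index-and-vote pattern is the same.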
Citations: 4
Virtual game show host — Dr. Chestr
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759486
R. Sakpal, D. Wilson
This paper describes the design, implementation, and evaluation of an interactive virtual human, Dr. Chestr: Computerized Host Encouraging Students to Review. Game show hosts project a unique personality that becomes the trademark of their respective shows. Our aim is to create virtual humans that can interact naturally and spontaneously using speech, emotion, and gesture. Dr. Chestr is our virtual game show host, whose personality is designed to increase user engagement. Dr. Chestr tests users with questions about the C++ programming language and lets them communicate using the most natural form of interaction: speech. We present the architecture and user evaluations of the Dr. Chestr Game Show.
Citations: 0
Effects of sensory feedback while interacting with graphical menus in virtual environments
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759467
Nguyen-Thong Dang, Vincent Perrot, D. Mestre
The present study investigates the effect of three types of sensory feedback (visual, auditory, and passive haptic) during two-handed interaction with graphical menus in virtual environments. Subjects controlled the position and orientation of a graphical menu with their non-dominant hand and interacted with menu items using their dominant index fingertip. An ISO 9241-9-based multi-tapping task and a sliding task were used to evaluate subjects' performance under the different feedback conditions. Adding passive haptics to visual feedback increased movement time and error rate and decreased throughput in the multi-tapping task, but outperformed visual-only and visual-auditory feedback in the sliding task (in terms of movement time and the number of times contact between the finger and the pointer was lost). The results also showed that visual-auditory feedback, even when judged useful by some subjects, decreased performance in the sliding task compared to visual-only feedback.
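The throughput measure used in ISO 9241-9 multi-tapping tasks is the effective index of difficulty divided by mean movement time, where the effective target width is derived from the spread of tap endpoints. A minimal sketch of that computation (illustrative, not the authors' analysis code; the function name is hypothetical):

```python
import math
import statistics

def throughput(endpoints, target_distance, movement_times):
    """ISO 9241-9 effective throughput in bits/s.

    endpoints: per-tap deviations along the task axis (same units as
               target_distance); movement_times: per-tap times in seconds.
    """
    sd = statistics.stdev(endpoints)           # endpoint spread
    we = 4.133 * sd                            # effective target width
    ide = math.log2(target_distance / we + 1)  # effective index of difficulty
    mt = statistics.mean(movement_times)       # mean movement time
    return ide / mt
```

The 4.133 factor maps the endpoint standard deviation to the width containing ~96% of hits, so throughput reflects the accuracy subjects actually achieved rather than the nominal target size.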
Citations: 1
Recognition-driven 3D navigation in large-scale virtual environments
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759439
Wei Guan, Suya You, U. Neumann
We present a recognition-driven navigation system for large-scale 3D virtual environments. The proposed system contains three parts: virtual environment reconstruction, feature database building, and recognition-based navigation. The virtual environment is reconstructed automatically from LIDAR data and aerial images. The feature database is composed of image patches with features and registered location and orientation information. The database images are taken at different distances from the scenes and at various viewing angles, and are then partitioned into smaller patches. When a user navigates the real world with a handheld camera, the captured image is used to estimate the camera's location and orientation, which are then reflected in the virtual environment. With the proposed patch approach, recognition is robust to large occlusions and runs in real time. Experiments show that the proposed navigation system is efficient and well synchronized with real-world navigation.
Citations: 5
Multi-sensorial field display: Presenting spatial distribution of airflow and odor
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759448
H. Matsukura, T. Nihei, H. Ishida
A new device has been developed for generating an airflow field and an odor-concentration distribution in a real environment and presenting them to the user. This device is called a multi-sensorial field (MSF) display. When two fans are placed facing each other, the airflows they generate collide and are deflected radially in the plane perpendicular to the original airflow direction. By exploiting this deflected airflow, the MSF display can present airflow blowing at the user from the front without placing fans in front of the user. The directivity of the deflection can be controlled by mounting nozzles on the fans to adjust the cross-sectional shape of the airflow jets. The MSF display can also generate an odor-concentration distribution in a real environment by introducing odor vapors into the airflow generated by the fans. The user can freely move his or her head and sniff at various locations in the generated odor distribution. Results of preliminary sensory tests are presented to show the potential of the MSF display.
Citations: 35
Full body haptic display for low-cost racing car driving simulators
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759490
Adrian Steinemann, Sebastian Tschudi, A. Kunz
Motion platforms are advanced systems for driving simulators. Studies have shown that these systems imitate the real driving behavior of cars very accurately. Most low-cost driving simulators, however, lack motion platforms and fail to simulate real motion forces; their focus is on high-quality video and audio, or on force feedback at the steering wheel. We aim to substitute for the real motion forces with low-cost actuators that stimulate the human extremities to create extended immersion. In this way, the quality of driving simulators without any motion platform can be increased. Our full-body haptic display concept for low-cost racing car simulators is based on air-cushion and pull mechanisms that apply longitudinal and lateral forces addressing the human mechanoreceptive and proprioceptive senses. The concept is analyzed in a user study with twenty-two participants.
Citations: 2
Depth judgment tasks and environments in near-field augmented reality
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759488
Gurjot Singh, J. Swan, J. A. Jones, S. Ellis
In this poster abstract we describe an experiment that measured depth judgments in optical see-through augmented reality at near-field distances of 34 to 50 centimeters. The experiment compared two depth judgment tasks: perceptual matching, a closed-loop task, and blind reaching, a visually open-loop task. The experiment tested each task in both a real-world environment and an augmented reality environment, using a between-subjects design with 40 participants. Matching judgments were very accurate in the real world, with errors on the order of millimeters and very little variance. In contrast, matching judgments in augmented reality showed a linear trend of increasing overestimation with increasing distance, with a mean overestimation of ~1 cm. With reaching judgments, participants underestimated by ~4.5 cm in both augmented reality and the real world. We also discovered and solved a calibration problem that arises at near-field distances.
Citations: 4
FAAST: The Flexible Action and Articulated Skeleton Toolkit
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759491
Evan A. Suma, B. Lange, A. Rizzo, D. Krum, M. Bolas
The Flexible Action and Articulated Skeleton Toolkit (FAAST) is middleware to facilitate integration of full-body control with virtual reality applications and video games using OpenNI-compliant depth sensors (currently the PrimeSensor and the Microsoft Kinect). FAAST incorporates a VRPN server for streaming the user's skeleton joints over a network, which provides a convenient interface for custom virtual reality applications and games. This body pose information can be used for goals such as realistically puppeting a virtual avatar or controlling an on-screen mouse cursor. Additionally, the toolkit also provides a configurable input emulator that detects human actions and binds them to virtual mouse and keyboard commands, which are sent to the actively selected window. Thus, FAAST can enable natural interaction for existing off-the-shelf video games that were not explicitly developed to support input from motion sensors. The actions and input bindings are configurable at run-time, allowing the user to customize the controls and sensitivity to adjust for individual body types and preferences. In the future, we plan to substantially expand FAAST's action lexicon, provide support for recording and training custom gestures, and incorporate real-time head tracking using computer vision techniques.
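The action-to-input binding FAAST's emulator performs can be sketched as a small state machine: an action fires when a tracked joint coordinate crosses a threshold, emitting a bound key press, and releases when it falls back below. This is an illustrative sketch of the pattern only (FAAST itself is middleware over OpenNI/VRPN); the class, joint names, and callbacks here are hypothetical.

```python
class ActionBinding:
    """Bind a joint-displacement action to a virtual key: press the key
    when the joint coordinate exceeds the threshold, release it when the
    coordinate drops back below."""

    def __init__(self, joint, axis, threshold, key):
        self.joint, self.axis = joint, axis
        self.threshold, self.key = threshold, key
        self.active = False  # is the bound key currently held down?

    def update(self, skeleton, press, release):
        # skeleton: dict mapping joint name -> (x, y, z) position
        value = skeleton[self.joint][self.axis]
        if not self.active and value > self.threshold:
            self.active = True
            press(self.key)      # e.g. forward to the focused window
        elif self.active and value <= self.threshold:
            self.active = False
            release(self.key)
```

Tracking the `active` flag is what makes the emulator usable with off-the-shelf games: the key is held for as long as the pose is held, rather than retriggered every frame.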
Citations: 238
Immersive ParaView: A community-based, immersive, universal scientific visualization application
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759487
Nikhil Shetty, Aashish Chaudhary, D. Coming, W. Sherman, P. O’leary, E. Whiting, S. Su
The availability of low-cost virtual reality (VR) systems, coupled with a growing population of researchers accustomed to newer interface styles, makes this a ripe time to help domain science researchers cross the bridge to immersive interfaces. The logical next step is for scientists, engineers, doctors, and others to incorporate immersive visualization into their exploration and analysis workflows. However, from past experience we know that having access to equipment is not sufficient; there are also several software hurdles to overcome. These obstacles must be lowered to give scientists, engineers, and medical professionals low-risk means of exploring technologies beyond their desktops.
Citations: 11
On accelerating a volume-based haptic feedback algorithm
Pub Date : 2011-03-19 DOI: 10.1109/VR.2011.5759474
Rui Hu, K. Barner, K. Steiner
The importance of haptic feedback is recognized by an increasing number of researchers in the virtual reality field. Recently, a volume-based haptic feedback approach has emerged, which samples the intersection volume between objects along the three axes of 3D space to render an accurate interaction force. This paper presents a method to reduce the complexity of the volume-based force feedback algorithm. The core of the proposed algorithm is to sample the intersection volume between two objects only once rather than three times; for the other two axes, a penetration-pair reconstruction algorithm generates the required information from the sampled result. Experimental results demonstrate that the proposed approach increases the frame rate of the volumetric haptic feedback algorithm by a factor of more than two, while the resulting force error is modest compared to the original volume-based feedback. The proposed algorithm may also be applied to accelerate other volume-based applications, e.g. volume-based force interaction between colliding deformable objects in virtual reality simulation. Moreover, the algorithm requires no pre-processing and is thus well suited to simulations where object topology is constantly changing, such as cutting, melting, or deforming processes.
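The single-axis sampling idea can be illustrated with a toy example: cast rays along one axis through a grid in the perpendicular plane, intersect both objects along each ray, keep the overlapping interval (the "penetration pair"), and sum the column volumes. This is a rough sketch for two analytic spheres under assumed grid bounds, not the paper's algorithm; the function names are hypothetical.

```python
import math

def sphere_z_interval(cx, cy, cz, r, x, y):
    # z-range where the vertical line through (x, y) is inside the sphere
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    if d2 > r * r:
        return None
    h = math.sqrt(r * r - d2)
    return cz - h, cz + h

def intersection_volume(s1, s2, step=0.02):
    """Estimate the intersection volume of two spheres (cx, cy, cz, r)
    by sampling z-intervals over an xy grid covering [-2, 2] x [-2, 2]."""
    vol = 0.0
    lo, hi = -2.0, 2.0
    n = int((hi - lo) / step)
    for i in range(n):
        x = lo + (i + 0.5) * step
        for j in range(n):
            y = lo + (j + 0.5) * step
            a = sphere_z_interval(*s1, x, y)
            b = sphere_z_interval(*s2, x, y)
            if a and b:
                # the penetration pair: overlap of the two z-intervals
                overlap = min(a[1], b[1]) - max(a[0], b[0])
                if overlap > 0:
                    vol += overlap * step * step
    return vol
```

For two unit spheres with centers one radius apart, the estimate converges to the analytic lens volume 5π/12 as the grid step shrinks; the paper's contribution is avoiding repeating this sampling on all three axes.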
Citations: 0