
Latest Publications from 2014 IEEE Virtual Reality (VR)

Ubiquitous virtual reality ‘To-Go’
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802101
Aryabrata Basu, K. Johnsen
We propose to demonstrate a ubiquitous immersive virtual reality system that is highly scalable and accessible to a larger audience. With the advent of handheld and wearable devices, virtual reality has gained considerable popularity with the general public. We present a practical design for such a system that offers the core affordances of immersive virtual reality in a portable, untethered configuration. In addition, we have developed an extensive immersive virtual experience that engages users both visually and aurally. This is an effort towards integrating VR into the space and time of user workflows.
Citations: 9
Application of Hanger Reflex to wrist and waist
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802111
Takuto Nakamura, Narihiro Nishimura, Michi Sato, H. Kajimoto
When a wire hanger is placed sideways on the head, so that the temporal region is sandwiched by the hanger, the head rotates unexpectedly. This phenomenon has been named the “Hanger Reflex”. Although it is a simple method for producing a pseudo-force sensation, the use of the wire hanger in this way has until now been limited in position to the head. Here we report a new finding: when the wrist or waist is equipped with a device of larger circumference, the arm or the body rotates involuntarily. This suggests that the Hanger Reflex principle might be applicable to parts of the body other than the head, which could lead to a compact whole-body force display. This paper documents the development and testing of the devices and suggests that they present the rotational force stably.
Citations: 24
Baseline SPAAM calibration accuracy and precision in the absence of human postural sway error
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802070
Kenneth R. Moser, Magnus Axholt, J. Swan
We conducted an experiment in an attempt to generate baseline accuracy and precision values for optical see-through (OST) head-mounted display (HMD) calibration without the inclusion of human postural sway error. This preliminary work will act as a control condition for future studies of postural error reduction. An experimental apparatus was constructed that allows a SPAAM calibration to be performed from 25 alignments taken with one of three distance distribution patterns: static, sequential, and magic square. The accuracy of the calibrations was determined by calculating the extrinsic X, Y, Z translation values from the resulting projection matrix. The standard deviation of each translation component was also calculated. The results show that the magic square distribution produced the most accurate parameter estimation and also the smallest standard deviation for each extrinsic translation component.
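The accuracy metric above comes from decomposing the estimated 3x4 projection matrix back into intrinsic and extrinsic parts. A minimal sketch of one standard way to recover the extrinsic translation (RQ decomposition of the left 3x3 block; the abstract does not state which decomposition the authors used, so treat this as illustrative):

```python
import numpy as np
from scipy.linalg import rq  # RQ decomposition: M = K @ R

def extrinsic_translation(P):
    """Recover the extrinsic translation t from a 3x4 projection
    matrix P ~ K [R | t], such as the one a SPAAM calibration yields."""
    M, p4 = P[:, :3], P[:, 3]
    K, R = rq(M)                       # K upper-triangular, R orthonormal
    S = np.diag(np.sign(np.diag(K)))   # force a positive diagonal on K
    K, R = K @ S, S @ R                # S @ S = I, so K @ R is unchanged
    return np.linalg.solve(K, p4)      # p4 = K @ t  =>  t = K^-1 @ p4

# Per-component precision across repeated calibrations, e.g.:
# np.std([extrinsic_translation(P) for P in calibrations], axis=0)
```

The global scale ambiguity of P cancels in the final solve, since scaling P scales K and p4 by the same factor.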
Citations: 12
Omegalib: A multi-view application framework for hybrid reality display environments
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802043
Alessandro Febretti, Arthur Nishimoto, V. Mateevitsi, L. Renambot, Andrew E. Johnson, J. Leigh
In the domain of large-scale visualization instruments, hybrid reality environments (HREs) are a recent innovation that combines the best-in-class capabilities of immersive environments with the best-in-class capabilities of ultra-high-resolution display walls. HREs create a seamless 2D/3D environment that supports both information-rich analysis and virtual reality simulation exploration at a resolution matching human visual acuity. Co-located research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, links between them, and the ability to bring in (or hide) data quickly as needed. In this paper we present Omegalib, a software framework that facilitates application development on HREs. Omegalib is designed to support dynamic reconfigurability of the display environment, so that areas of the display can be interactively allocated to 2D or 3D workspaces as needed. Compared to existing frameworks and toolkits, Omegalib makes it possible to run multiple immersive applications on a cluster-controlled display system, route different input sources dynamically to applications, and optionally redirect rendering results to a distributed compositing manager. Omegalib supports pluggable front-ends to simplify the integration of third-party libraries like OpenGL, OpenSceneGraph, and the Visualization Toolkit (VTK). We present examples of applications developed with Omegalib for the 74-megapixel, 72-tile CAVE2™ system, and show how a hybrid reality environment proved effective in supporting the work of a co-located research group in the environmental sciences.
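Dynamic reconfigurability here means that regions of the wall can be handed to a 2D or 3D workspace at runtime. The following is a conceptual sketch of that idea only; it is not Omegalib's actual API, and all class and method names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    mode: str                       # "2D" panel or "3D" immersive view
    tiles: set = field(default_factory=set)

class TiledWall:
    """A display wall whose tiles can be re-allocated while running."""
    def __init__(self, cols, rows):
        self.free = {(c, r) for c in range(cols) for r in range(rows)}
        self.workspaces = {}

    def allocate(self, name, mode, tiles):
        tiles = set(tiles)
        if not tiles <= self.free:
            raise ValueError("some tiles are already allocated")
        self.free -= tiles
        self.workspaces[name] = Workspace(name, mode, tiles)
        return self.workspaces[name]

    def release(self, name):
        self.free |= self.workspaces.pop(name).tiles

# Give the left 2x3 block of a 6x3 wall to an immersive 3D view,
# leaving the remaining tiles free for 2D analysis panels:
wall = TiledWall(6, 3)
wall.allocate("simulation", "3D", {(c, r) for c in range(2) for r in range(3)})
```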
Citations: 43
An enhanced steering algorithm for redirected walking in virtual environments
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802053
Mahdi Azmandian, Rhys Yahata, M. Bolas, Evan A. Suma
Redirected walking techniques enable natural locomotion through immersive virtual environments that are considerably larger than the available real world walking space. However, the most effective strategy for steering the user remains an open question, as most previously presented algorithms simply redirect toward the center of the physical space. In this work, we present a theoretical framework that plans a walking path through a virtual environment and calculates the parameters for combining translation, rotation, and curvature gains such that the user can traverse a series of defined waypoints efficiently based on a utility function. This function minimizes the number of overt reorientations to avoid introducing potential breaks in presence. A notable advantage of this approach is that it leverages knowledge of the layout of both the physical and virtual environments to enhance the steering strategy.
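The translation, rotation, and curvature gains named above are the standard redirected walking controls: scaling walked distance, scaling head turns, and bending straight real walks into virtual arcs. A minimal per-frame sketch of how such gains act (the gain values and function shape are illustrative; the paper's contribution is the planner that chooses the gains, which is not reproduced here):

```python
import math

def apply_gains(virt_yaw, d_dist, d_yaw,
                g_t=1.2, g_r=1.3, g_c=math.radians(3.0)):
    """Map one frame of real motion into virtual motion.

    virt_yaw -- current virtual heading (rad)
    d_dist   -- distance walked this frame in the real world (m)
    d_yaw    -- real head rotation this frame (rad)
    g_t/g_r  -- translation / rotation gains
    g_c      -- curvature gain: injected rotation per meter walked (rad/m)
    Returns (new virtual heading, virtual distance to advance).
    """
    virt_yaw += g_r * d_yaw      # amplify or compress real turns
    virt_yaw += g_c * d_dist     # bend a straight walk into an arc
    return virt_yaw, g_t * d_dist

# e.g. a 1 m straight step with no head turn still rotates the virtual
# heading by about 3 degrees, steering the user imperceptibly:
new_yaw, step = apply_gains(0.0, 1.0, 0.0)
```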
Citations: 19
A demonstration of tablet-based interaction panels for immersive environments
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802108
D. Krum, Thai-Binh Phan, Lauren Cairco, Peter Wang, M. Bolas
Our demo deals with the need in immersive virtual reality for devices that support expressive and adaptive interaction in a low-cost, eyes-free manner. Leveraging rapid prototyping techniques for fabrication, we have developed a variety of panels that can be overlaid on multi-touch tablets and smartphones. The panels are coupled with an app running on the multi-touch device that exchanges commands and state information over a wireless network with the virtual reality application. Sculpted features of the panels provide tactile disambiguation of control widgets and an onscreen heads-up display provides interaction state information. A variety of interaction mappings can be provided through software to support several classes of interaction techniques in virtual environments. We foresee additional uses for applications where eyes-free use and adaptable interaction interfaces can be beneficial.
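The command-and-state exchange over a wireless network described above can be pictured with a small sketch: JSON messages over UDP between the tablet app and the VR application. The message fields, address, and port are hypothetical, not the demo's actual protocol:

```python
import json
import socket

VR_ADDR = ("192.168.0.10", 9000)   # hypothetical VR-application endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_widget_event(widget_id, value):
    """Tablet side: report a touched control widget to the VR app."""
    msg = {"type": "widget", "id": widget_id, "value": value}
    sock.sendto(json.dumps(msg).encode("utf-8"), VR_ADDR)

# The VR app would reply with interaction-state updates that the tablet
# renders on its onscreen heads-up display, e.g.:
#   {"type": "state", "mode": "navigate", "selection": "object_42"}
send_widget_event("slider_scale", 0.75)
```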
Citations: 4
The Virtual World Framework: Collaborative virtual environments on the web
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802103
E. Burns, David Easter, Rob Chadwick, David A. Smith, Carl Rosengrant
Software distribution and installation is a logistical issue for large enterprises. Web applications are often a good solution because users can instantly receive application updates on any device, without needing special permissions to install software on their hardware. Until recently, it was not possible to create 3D multiuser virtual environment-based web applications without requiring a browser plugin. However, recent web standards have made this possible. We present the Virtual World Framework (VWF), a software framework for creating 3D multiuser web applications. We are using VWF to create applications for team training and collaboration. VWF can be downloaded at http://virtual.wf.
Citations: 1
Time perception during walking in virtual environments
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802054
G. Bruder, Frank Steinicke
A large body of literature has analyzed differences between perception in the real world and in virtual environments (VEs) in terms of space, distance, and speed perception. So far, to our knowledge, no empirical data has been collected on time misperception in immersive VEs. However, there is evidence that time perception can deviate from veridical judgments, for instance due to visual or auditory stimulation related to motion misperception. In this work we evaluate time perception during walking with a pilot study in an immersive head-mounted display (HMD) environment. No significant differences between time judgments in the real and the virtual environment were observed.
Citations: 12
Responsive audiences — Nonverbal cues as reactions to a speaker's behavior
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802080
Ana-Despina Tudor, Ilinca Mustatea, Sandra Poeschl, N. Döring
Presentation skills that involve public speaking are an asset that many recognize as important for their careers or their studies. One way to learn how to maintain eye contact and address an audience clearly as a speaker is to use virtual audiences (VAs) that simulate the reactions of a live public. A mixed-methods exploratory study was conducted to conceptualize the design of such a VA. The purpose was to research how the nonverbal cues of live audiences vary depending on a speaker's gaze patterns (gazing towards the audience vs. gazing towards the presentation slides or notes) and vocal loudness (low vs. normal). Thirty-six students (listeners) were videotaped during a public speaking situation. The analysis shows that the speaker's gaze patterns and vocal loudness influenced several nonverbal cues the audience displayed. The results could be applied in the design of VAs by making them respond in real time to variations in the gaze patterns and voice loudness of speakers (trainees).
Citations: 3
The effectiveness of an AR-based context-aware assembly support system in object assembly
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802051
Bui Minh Khuong, K. Kiyokawa, Andrew Miller, Joseph J. La Viola, T. Mashita, H. Takemura
This study evaluates the effectiveness of an AR-based context-aware assembly support system, and of its proposed AR visualization modes, in object assembly. Although many AR-based assembly support systems have been proposed, few keep track of the assembly status in real time and automatically recognize error and completion states at each step. Naturally, the effectiveness of such context-aware systems remains unexplored. Our test-bed system displays guidance information and error-detection information corresponding to the recognized assembly status in the context of building-block (LEGO) assembly. A user wearing a head-mounted display (HMD) can intuitively build a block structure on a table by visually confirming correct and incorrect blocks and locating where to attach new blocks. We propose two AR visualization modes: one displays guidance information directly overlaid on the physical model, and the other renders guidance information on a virtual model adjacent to the real model. An evaluation was conducted to compare these AR visualization modes and to determine the effectiveness of context-aware error detection. Our experimental results indicate that, under moderate registration accuracy with marker-based tracking, the visualization mode that shows target status next to the real objects of concern outperforms the traditional direct overlay.
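Keeping track of the assembly status and recognizing error and completion states at each step amounts to comparing recognized block poses against the target model for the current step. A minimal sketch under assumed data layouts (block IDs mapped to positions; the tolerance and structure are assumptions, not the paper's implementation):

```python
def check_step(recognized, target, tol=0.005):
    """Classify the current assembly step.

    recognized -- {block_id: (x, y, z)} block poses from the tracker
    target     -- {block_id: (x, y, z)} poses this step requires
    Returns "complete", "in_progress", or ("error", block_id).
    """
    placed = 0
    for block_id, goal in target.items():
        pose = recognized.get(block_id)
        if pose is None:
            continue                    # block not attached yet
        if max(abs(p - g) for p, g in zip(pose, goal)) > tol:
            return ("error", block_id)  # misplaced: show the AR error cue
        placed += 1
    return "complete" if placed == len(target) else "in_progress"

# e.g. one block placed 2 cm away from its goal position:
print(check_step({"b1": (0.02, 0.0, 0.0)}, {"b1": (0.0, 0.0, 0.0)}))
```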
Citations: 83