
2009 IEEE Virtual Reality Conference: Latest Publications

Measurement of Expression Characteristics in Emotional Situations using Virtual Reality
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811047
Kiwan Han, J. Ku, Hyeongrae Lee, Jinsick Park, Sangwoo Cho, Jae-Jin Kim, I. Kim, Sun I. Kim
Expressions are a basic necessity for daily living, as they are required for managing relationships with other people. Conventional expression training has difficulty achieving an objective measurement, because its assessment depends on the therapist's ability to judge a patient's state or training effectiveness. In addition, it is difficult to provide emotional and social situations in the same manner for each training or assessment session. Virtual reality techniques can overcome these shortcomings of conventional studies by providing exact, objective measurements and consistent emotional and social situations. In this study, we developed a virtual reality prototype that can present emotional situations and measure expression characteristics. Although this is a preliminary study, it demonstrates the potential of virtual reality as an assessment tool.
Citations: 5
Effects of Latency and Spatial Jitter on 2D and 3D Pointing
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811029
Robert J. Teather, Andriy Pavlovych, W. Stuerzlinger
We investigate the effects of input device latency and spatial jitter on 2D pointing tasks and a 3D movement task. First, we characterize jitter and latency in a 3D tracking device and in an optical mouse used for baseline comparison. We present an experiment based on ISO 9241-9, which measures the performance of pointing devices. We added latency and jitter to the mouse and compared it to a 3D tracker. Results indicate that latency has a stronger effect on performance than small spatial jitter. A second experiment found that erratic jitter "spikes" can affect 3D movement performance.
Citations: 5
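As a methodological aside, ISO 9241-9 evaluations typically summarize pointing performance as effective throughput, computed from the endpoint scatter and movement times recorded in each condition; comparing this measure across conditions is how latency and jitter effects are usually contrasted. The sketch below illustrates that calculation; the function name, parameters, and example numbers are ours, not the paper's.

```python
import math
from statistics import mean, stdev

def effective_throughput(nominal_distance, endpoint_deviations, movement_times):
    """ISO 9241-9 style effective throughput (bits/s) for one condition.

    nominal_distance    -- target distance D for the condition
    endpoint_deviations -- signed selection errors along the task axis, one per trial
    movement_times      -- movement time per trial, in seconds
    """
    we = 4.133 * stdev(endpoint_deviations)            # effective target width
    de = nominal_distance + mean(endpoint_deviations)  # effective movement distance
    ide = math.log2(de / we + 1.0)                     # effective index of difficulty
    return ide / mean(movement_times)                  # throughput in bits/s

# Example: nine trials at D = 256 px
print(effective_throughput(
    256,
    [4.0, -6.5, 2.1, 8.3, -3.2, 0.5, 5.9, -7.1, 1.8],
    [0.61, 0.58, 0.65, 0.70, 0.55, 0.62, 0.66, 0.59, 0.63]))
```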
Virtual Heliodon: Spatially Augmented Reality for Architectural Daylighting Design
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811000
Yu Sheng, Theodore C. Yapo, C. Young, B. Cutler
We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the inter-reflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation.
Citations: 15
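To make the hybrid rendering idea concrete, the sketch below composites a per-pixel direct sunlight term, gated by a shadow test, with an indirect term taken from a patch radiosity solution. It is a minimal illustration under our own assumptions (array layouts, Lambertian shading, a precomputed shadow mask standing in for the shadow-volume result), not the authors' renderer.

```python
import numpy as np

def composite_daylight(albedo, normals, sun_dir, shadow_mask, indirect_radiosity):
    """Per-pixel composite of a hybrid daylight simulation (illustrative).

    albedo             -- (H, W, 3) diffuse reflectance per pixel
    normals            -- (H, W, 3) unit surface normals per pixel
    sun_dir            -- (3,) unit vector pointing toward the sun
    shadow_mask        -- (H, W) 1.0 where the sun is visible, 0.0 in shadow
    indirect_radiosity -- (H, W, 3) radiosity interpolated from the patch solution
    """
    # Lambertian direct term: N . L clamped to zero, gated by the shadow test
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, sun_dir), 0.0, None)
    direct = albedo * (shadow_mask * ndotl)[..., None]
    # The indirect term comes from the coarser radiosity solution
    return direct + indirect_radiosity
```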
Effective Presentation Technique of Scent Using Small Ejection Quantities of Odor
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811015
Junta Sato, Kaori Ohtsu, Yuichi Bannai, Ken-ichi Okada
Trials on the transmission of olfactory information together with audio/visual information are currently underway. However, a problem exists in that continuous emission of scent leaves scent in the air, causing human olfactory adaptation. To resolve this problem, we aimed to minimize the quantity of scent ejected by using an ink-jet olfactory display we developed. Following the development of a breath sensor for breath synchronization, we next developed an olfactory ejection system that presents scent on each inspiration. We then measured human olfactory characteristics in order to determine the most suitable method for presenting scent on an inspiration. Experiments revealed that the intensity of scent perceived by the user was altered by differences in the presentation method even when the quantity of scent was unchanged. We present here a method of odor presentation that most effectively minimizes the ejection quantities.
Citations: 32
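The breath-synchronized presentation can be pictured as a small control loop that watches the breath sensor and fires one short ejection pulse at the onset of each inhalation. The sketch below is a hypothetical version of such a loop; `read_nasal_flow`, `eject_pulse`, and the threshold values are assumptions of ours, not the paper's interfaces.

```python
import time

def run_breath_synchronized_ejection(read_nasal_flow, eject_pulse, duration_s=60.0,
                                     onset_threshold=0.2, pulse_ms=40, sample_hz=100):
    """Fire one short scent pulse at the onset of each inhalation (sketch).

    read_nasal_flow -- hypothetical callback returning breath airflow; assumed
                       positive during inhalation, negative during exhalation
    eject_pulse     -- hypothetical callback driving the ink-jet head for N ms
    """
    inhaling = False
    t_end = time.time() + duration_s
    while time.time() < t_end:
        flow = read_nasal_flow()
        if not inhaling and flow > onset_threshold:
            inhaling = True
            eject_pulse(pulse_ms)      # one small burst per inspiration
        elif inhaling and flow < 0.0:
            inhaling = False           # exhalation started; re-arm for the next breath
        time.sleep(1.0 / sample_hz)
```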
An Image-Warping Architecture for VR: Low Latency versus Image Quality
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4810995
F. Smit, R. V. Liere, S. Beck, B. Fröhlich
Designing low end-to-end latency system architectures for virtual reality is still an open and challenging problem. We describe the design, implementation and evaluation of a client-server depth-image warping architecture that updates and displays the scene graph at the refresh rate of the display. Our approach works for scenes consisting of dynamic and interactive objects. The end-to-end latency is minimized and smooth object motion is generated. However, this comes at the expense of the image quality inherent to warping techniques. We evaluate the architecture and its design trade-offs by comparing latency and image quality to a conventional rendering system. Our experience with the system confirms that the approach facilitates common interaction tasks such as navigation and object manipulation.
Citations: 34
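The core of a depth-image warping client is the reprojection of a rendered reference image, using its per-pixel depth, into the viewer pose available at display refresh time. The sketch below shows a generic forward warp of this kind; the matrix conventions and the absence of hole filling or z-buffering are our simplifications, not the architecture described in the paper.

```python
import numpy as np

def warp_depth_image(depth, color, K, pose_ref, pose_new):
    """Forward-warp a reference color+depth image into a new viewpoint (sketch).

    depth    -- (H, W) depth per pixel in the reference view
    color    -- (H, W, 3) reference image
    K        -- 3x3 camera intrinsics
    pose_ref -- 4x4 camera-to-world matrix of the reference view
    pose_new -- 4x4 camera-to-world matrix of the new view
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # 3 x N

    cam_ref = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                # unproject
    world = pose_ref @ np.vstack([cam_ref, np.ones((1, cam_ref.shape[1]))])

    cam_new = np.linalg.inv(pose_new) @ world                              # reproject
    proj = K @ cam_new[:3]
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)

    out = np.zeros_like(color)
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[valid], u[valid]] = color.reshape(-1, color.shape[-1])[valid]    # holes remain
    return out
```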
cMotion: A New Game Design to Teach Emotion Recognition and Programming Logic to Children using Virtual Humans
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811039
Samantha L. Finkelstein, A. Nickel, Lane Harrison, Evan A. Suma, T. Barnes
This paper presents the design of the final stage of a new game currently in development, entitled cMotion, which will use virtual humans to teach emotion recognition and programming concepts to children. Having multiple facets, cMotion is designed to teach the intended users how to recognize facial expressions and manipulate an interactive virtual character using a visual drag-and-drop programming interface. By creating a game which contextualizes emotions, we hope to foster learning of both emotions in a cultural context and computer programming concepts in children. The game will be completed in three stages which will each be tested separately: a playable introduction which focuses on social skills and emotion recognition, an interactive interface which focuses on computer programming, and a full game which combines the first two stages into one activity.
Citations: 57
Eye Tracking for Avatar Eye Gaze Control During Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811003
W. Steptoe, Oyewole Oyekoya, A. Murgia, R. Wolff, John P Rae, Estefania Guimaraes, D. Roberts, A. Steed
In face-to-face collaboration, eye gaze is used both as a bidirectional signal to monitor and indicate focus of attention and action, and as a resource to manage the interaction. In remote interaction supported by Immersive Collaborative Virtual Environments (ICVEs), embodied avatars representing and controlled by each participant share a virtual space. We report on a study designed to evaluate methods of avatar eye gaze control during an object-focused puzzle scenario performed between three networked CAVE™-like systems. We compare tracked gaze, in which avatars' eyes are controlled by head-mounted mobile eye trackers worn by participants, to a gaze model informed by head orientation for saccade generation, and static gaze featuring non-moving eyes. We analyse task performance, subjective user experience, and interactional behaviour. While not providing statistically significant benefit over static gaze, tracked gaze is observed as the highest performing condition. However, the gaze model resulted in significantly lower task performance and increased error rate.
Citations: 43
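The head-orientation-driven condition can be imagined as a generator that keeps the avatar's gaze near the head-forward direction and jumps to a new small offset within a cone a few times per second. The toy generator below is an illustrative stand-in under those assumptions; it is not the model evaluated in the study.

```python
import random

def head_informed_gaze(head_yaw_deg, head_pitch_deg, t, saccade_rate_hz=2.0,
                       cone_deg=10.0, seed=0):
    """Toy gaze direction driven only by head orientation (illustrative).

    Picks a deterministic random offset inside a cone around the head-forward
    direction and keeps it for 1/saccade_rate_hz seconds, so successive calls
    within one interval return the same saccade target.
    """
    interval = int(t * saccade_rate_hz)            # index of the current fixation
    rng = random.Random(seed * 100003 + interval)  # stable within the interval
    yaw_off = rng.uniform(-cone_deg, cone_deg)
    pitch_off = rng.uniform(-cone_deg, cone_deg)
    return head_yaw_deg + yaw_off, head_pitch_deg + pitch_off
```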
Issues with Virtual Space Perception within Reaching Distance: Mitigating Adverse Effects on Applications Using HMDs in the Automotive Industry
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811027
Mathias Moehring, Antje Gloystein, R. Dörner
Besides visual validation of virtual car models, immersive applications like a Virtual Seating Buck enable car designers and engineers to decide product-related issues without building expensive hardware prototypes. To replace real models, it is mandatory that decision makers can rely on VR-based findings. However, especially when using a Head Mounted Display, users complain about an unnatural perception of space. Such misperceptions have already been reported in the literature, where several evaluation methods have been proposed for researching possible causes. Unfortunately, most of these methods do not represent the scenarios usually found in the automotive industry, since they focus on distances of five to fifteen meters, which are too large. In this paper, we present an evaluation scenario adapted to size and distance perception within the reach of the user. With this method, we analyzed our standard setups and found a systematic error that is lower than the aberrations reported in earlier research work. Furthermore, we tried to mitigate perception errors by applying a Depth of Field blur to the virtual images.
Citations: 17
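One plausible basis for the Depth of Field blur mentioned at the end is the thin-lens circle of confusion, which grows with a pixel's distance from the focus plane. The sketch below computes it per pixel from a depth buffer; the camera parameters are illustrative defaults of ours, and mapping the confusion diameter to a blur radius in pixels is left to the filtering stage.

```python
import numpy as np

def circle_of_confusion(depth_m, focus_m, focal_length_m=0.05, aperture_m=0.025):
    """Thin-lens circle-of-confusion diameter (metres on the sensor) per pixel.

    depth_m -- array of scene depths in metres
    focus_m -- distance of the focus plane in metres
    """
    depth_m = np.asarray(depth_m, dtype=float)
    coc = (aperture_m * np.abs(depth_m - focus_m) / np.maximum(depth_m, 1e-6)
           * focal_length_m / (focus_m - focal_length_m))
    return coc
```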
Effect of Proprioception Training of patient with Hemiplegia by Manipulating Visual Feedback using Virtual Reality: The Preliminary results
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811056
Sangwoo Cho, J. Ku, Kiwan Han, Hyeongrae Lee, Jinsick Park, Y. Kang, I. Kim, Sun I. Kim
In this study, we confirmed the effect of proprioception training for patients with hemiplegia by manipulating visual feedback. Six patients with hemiplegia participated in the experiment. The patients trained on a reaching task, with and without visual feedback, for two weeks. They were evaluated with pre-, mid-, and post-tests on the task both with and without visual feedback. The results show that the first-click error distance on the reaching task was reduced after training when patients trained on the task with visual feedback removed. In addition, the velocity profile of the reaching movement formed an inverted U shape after training. In conclusion, visual feedback manipulation using virtual reality could provide a tool for training reaching movement by enforcing the use of proprioception, which enhances reaching movement skills for patients with hemiplegia.
Citations: 5
Interactive Odor Playback Based on Fluid Dynamics Simulation
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811042
H. Matsukura, Hitoshi Yoshida, H. Ishida, T. Nakamoto
This article describes experiments on an interactive application of an olfactory display system that incorporates computational fluid dynamics (CFD) simulation. In the proposed system, the olfactory display is used to add special effects to movies and virtual reality systems by releasing odors relevant to the scenes shown on the computer screen. To provide high-presence olfactory stimuli to the users, a model of the environment shown in the scene is provided to a CFD solver. The airflow field in the environment and the dispersal of odor molecules from their source are then calculated. An odor blender is used to generate the odor, with its concentration determined from the calculated odor distribution. In the experiments, a virtual room was presented on a PC monitor, and panelists were asked to stroll through the room to find an odor source. The results showed the effectiveness of the CFD simulation in reproducing the spatial distribution of the odor in the virtual space.
Citations: 12
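The transport part of such a system can be approximated by an advection-diffusion update of an odor concentration field driven by a precomputed airflow field; the display then samples the concentration at the user's position each frame and sends it to the odor blender. The grid-based sketch below illustrates that idea under our own assumptions (2D grid, explicit central differences, periodic boundaries via np.roll); it is not the CFD solver used in the paper.

```python
import numpy as np

def step_odor_field(conc, u, v, dt=0.05, dx=0.1, diffusivity=1e-3, source=None):
    """One explicit advection-diffusion update of a 2D odor field (sketch).

    conc   -- (H, W) odor concentration on the grid
    u, v   -- (H, W) precomputed airflow components on the same grid
    source -- optional (H, W) emission rate at the odor source cells
    """
    c = conc.copy()
    # Central differences for advection and diffusion; a real solver would
    # typically use an upwind scheme and proper boundary conditions.
    dcdx = (np.roll(c, -1, axis=1) - np.roll(c, 1, axis=1)) / (2 * dx)
    dcdy = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / (2 * dx)
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    c += dt * (-u * dcdx - v * dcdy + diffusivity * lap)
    if source is not None:
        c += dt * source
    return np.clip(c, 0.0, None)

# Each frame the display would read the value of c at the grid cell nearest
# the user's nose and set the odor blender to that concentration.
```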