
2009 IEEE Virtual Reality Conference: Latest Publications

Demonstration of Improved Olfactory Display using Rapidly-Switching Solenoid Valves
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811065
T. Nakamoto, M. Kinoshita, Keisuke Murakami, Y. Ariyakul
This article describes our research toward an olfactory display system that presents odors with a vivid sense of reality. We developed a multi-component olfactory display that generates a variety of odors by blending multiple odor vapors at arbitrary ratios. This research demo presents two applications of the display. The first is an odor-approximation experiment using odor components extracted from a mass spectrum database. The second is the reproduction of video with smell: the temporal changes of odor type and concentration are recorded and presented together with video obtained from a web camera, so that people can enjoy the recorded footage with scents.
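The abstract does not give the valve timing scheme, but a natural reading of "rapidly-switching solenoid valves" is duty-cycle (PWM-style) blending, where each component's valve is held open for a fraction of a short switching period proportional to its share of the mix. A minimal sketch of that mapping; the period length and function names are assumptions, not the authors' design:

```python
# Hypothetical sketch of ratio-to-duty-cycle mapping for a valve-based
# odor blender: each valve's open time per switching period is
# proportional to its component's share of the target mix.

PERIOD_MS = 100  # assumed switching period (not from the paper)

def blend_schedule(ratios):
    """Map component ratios (any positive total) to per-valve open
    durations (ms) within one switching period."""
    total = sum(ratios)
    if total <= 0:
        raise ValueError("ratios must contain a positive component")
    return [PERIOD_MS * r / total for r in ratios]

# Example: approximate an odor as 50% component A, 30% B, 20% C.
print(blend_schedule([5, 3, 2]))  # -> [50.0, 30.0, 20.0] ms open times
```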
Citations: 16
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4810999
M. Livingston, Zhuming Ai, J. Swan, H. Smallman
We tested users' depth perception of virtual objects in our mobile augmented reality (AR) system in both indoor and outdoor environments using a depth matching task. The indoor environment is characterized by strong linear perspective cues; we attempted to re-create these cues in the outdoor environment. In the indoor environment, we found an overall pattern of underestimation of depth that is typical for virtual environments and AR systems. However, in the outdoor environment, we found that subjects overestimated depth. In addition, our synthetic linear perspective cues met with a measure of success, leading users to reduce their estimate of the depth of distant objects. We describe the experimental procedure, analyze the data, present the results of the study, and discuss the implications for mobile, outdoor AR systems.
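As a point of clarity on how a depth matching task is typically scored: each trial yields a judged and an actual distance, and the signed difference indicates over- or underestimation. A toy sketch with invented values (none of this is the study's data):

```python
# Hypothetical scoring sketch for a depth matching task. Positive
# signed error means the subject overestimated depth, negative means
# underestimation; the trial values below are made up.

def signed_depth_error(judged_m, actual_m):
    """Signed error in metres: judged minus actual."""
    return judged_m - actual_m

trials = [(10.5, 10.0), (18.2, 20.0)]  # (judged, actual) pairs
errors = [signed_depth_error(j, a) for j, a in trials]
mean_error = sum(errors) / len(errors)
print(mean_error)  # < 0 suggests net underestimation, > 0 overestimation
```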
Citations: 54
Virtual Humans That Touch Back: Enhancing Nonverbal Communication with Virtual Humans through Bidirectional Touch
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811019
Aaron Kotranza, Benjamin C. Lok, C. Pugh, D. Lind
Touch is a powerful component of human communication, yet it has been largely absent from communication between humans and virtual humans (VHs). This paper expands on recent work that allowed unidirectional touch from human to VH by evaluating bidirectional touch as a new channel for nonverbal communication. A VH augmented with a haptic interface is able to touch her interaction partner using a pseudo-haptic touch or an active-haptic touch from a co-located mechanical arm. Within the context of a simulated doctor-patient interaction, two user studies (n = 54) investigate how touch can be used by both human and VH to communicate. Results show that human-to-VH touch is used for the same communication purposes as human-to-human touch, and that VH-to-human touch (pseudo-haptic and active-haptic) allows the VH to communicate with its human interaction partner. The enhanced nonverbal communication provided by bidirectional touch has the potential to solve difficult problems in VH research, such as disambiguating user speech, enforcing social norms, and achieving rapport with VHs.
Citations: 28
Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811013
D. Roberts, R. Wolff, John P Rae, A. Steed, R. Aspin, Moira McIntyre, Adriana Pena, Oyewole Oyekoya, W. Steptoe
Eye gaze is an important and widely studied non-verbal resource in co-located social interaction. When we attempt to support tele-presence between people, two main technologies can be used today: video-conferencing (VC) and collaborative virtual environments (CVEs). In VC, one can observe eye-gaze behaviour, but in practice the targets of eye-gaze are only correct if the participants remain relatively still. We attempt to support eye-gaze behaviour in an unconstrained manner by integrating eye-trackers into an Immersive CVE (ICVE) system. This paper aims to show that while both ICVE and VC let people discern when they are being looked at, and what else is being looked at, when someone gazes into their space from another location, only ICVE continues to do so as people move. The conditions of aligned VC, ICVE, eye-gaze enabled ICVE, and co-location are compared. The impact of alignment, lighting, resolution, and perspective distortion is minimised through a set of pilot experiments before a formal experiment records results for optimal settings. Results show that both VC and ICVE support eye-gaze in constrained situations, but only ICVE supports movement of the observer. We quantify the mis-judgements that are made and discuss how our findings might inform research into supporting eye-gaze through interpolated free-viewpoint video methods.
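The abstract does not describe the system's internal gaze representation; one common way such systems resolve "who is being looked at" is to cast the tracked gaze as a world-space ray and test it against head proxies of the other participants. A hedged sketch of that idea (ray-sphere test; all names and the sphere-proxy choice are ours):

```python
# Hypothetical gaze-target resolution: does the gaze ray pass within
# target_radius of target_center? The avatar head is treated as a
# sphere proxy purely for illustration.
import math

def gaze_ray_hits(origin, direction, target_center, target_radius):
    # Vector from eye to target
    to_target = [t - o for t, o in zip(target_center, origin)]
    # Normalize the gaze direction
    norm = math.sqrt(sum(d * d for d in direction))
    d = [x / norm for x in direction]
    # Distance along the ray to the closest approach
    along = sum(v * x for v, x in zip(to_target, d))
    if along < 0:
        return False  # target is behind the viewer
    closest = [o + along * x for o, x in zip(origin, d)]
    dist2 = sum((c - t) ** 2 for c, t in zip(closest, target_center))
    return dist2 <= target_radius ** 2

# Eye at head height, looking down -z, avatar head 2 m ahead:
print(gaze_ray_hits((0, 1.7, 0), (0, 0, -1), (0.1, 1.7, -2.0), 0.15))
```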
Citations: 47
iPhone/iPod Touch as Input Devices for Navigation in Immersive Virtual Environments
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811045
Ji-Sun Kim, D. Gračanin, K. Matkovič, Francis K. H. Quek
The iPhone and iPod Touch are multi-touch handheld devices that open new possibilities for interaction techniques. We describe an iPhone/iPod Touch implementation of a navigation interaction technique originally developed for a larger multi-touch device (the Lemur). The technique was used for navigation tasks in a CAVE virtual environment. We performed a pilot study to measure control accuracy and to observe how human subjects respond to the technique on the iPhone and iPod Touch devices, and used the preliminary results to improve the design of the interaction technique.
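The abstract does not spell out the mapping from touch input to travel; a typical rate-control scheme for handheld navigation maps a drag vector to forward and turning velocities. An illustrative sketch, with gains, dead zone, and coordinate convention all invented (the paper's Lemur-derived technique may differ):

```python
# Hypothetical rate-control mapping from a normalized drag vector
# (-1..1 in screen units) to travel velocities in the virtual world.

DEAD_ZONE = 0.05   # ignore tiny drags
SPEED_GAIN = 2.0   # metres per second at full deflection
TURN_GAIN = 45.0   # degrees per second at full deflection

def drag_to_velocity(dx, dy):
    """Map a drag vector to (forward m/s, yaw deg/s); dragging up
    (negative dy) moves forward, dragging right turns right."""
    forward = -dy * SPEED_GAIN if abs(dy) > DEAD_ZONE else 0.0
    yaw = dx * TURN_GAIN if abs(dx) > DEAD_ZONE else 0.0
    return forward, yaw

print(drag_to_velocity(0.2, -0.8))  # -> (1.6, 9.0): forward and turning
```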
Citations: 40
Composable Volumetric Lenses for Surface Exploration
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811060
Jan-Phillip Tiesel, C. Borst, Kaushik Das, G. Kinsland, Christopher M. Best, Vijay B. Baiyya
We demonstrate composable volumetric lenses as interpretational tools for geological visualization. The lenses define a constrained focus region that offers alternative views of datasets to the user while maintaining the context of surrounding features. Our rendering method is based on run-time composition of GPU shader programs that implement per-fragment clipping to lens boundaries and surface shader evaluation. It supports composition of lenses, and the user can influence the resulting visualization by interactively changing the order in which the individual lens effects are applied. Multiple shader effects have been created and used in our lab for interpretation of high-resolution elevation datasets (such as LIDAR and SRTM).
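The key property of the technique is that lens effects compose in a user-chosen order. A CPU-side toy model of that composition idea; the actual system composes GPU shader programs at run time, and all names below are illustrative:

```python
# Hypothetical order-dependent lens composition: each lens is a volume
# test plus a color effect, applied in sequence to fragments that fall
# inside the lens volume. Reordering the list changes the result.

def inside_box(p, lo, hi):
    return all(l <= c <= h for c, l, h in zip(p, lo, hi))

def apply_lenses(fragment_pos, base_color, lenses):
    """lenses: ordered list of (volume_test, effect) pairs."""
    color = base_color
    for volume_test, effect in lenses:
        if volume_test(fragment_pos):
            color = effect(color)
    return color

tint_red = lambda c: (min(c[0] + 0.5, 1.0), c[1], c[2])
darken = lambda c: tuple(x * 0.5 for x in c)
lenses = [
    (lambda p: inside_box(p, (0, 0, 0), (1, 1, 1)), tint_red),
    (lambda p: inside_box(p, (0.5, 0, 0), (2, 1, 1)), darken),
]
# Where the lenses overlap, the composed color depends on their order:
print(apply_lenses((0.7, 0.5, 0.5), (0.2, 0.2, 0.2), lenses))        # tint then darken
print(apply_lenses((0.7, 0.5, 0.5), (0.2, 0.2, 0.2), lenses[::-1]))  # darken then tint
```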
Citations: 2
Creating Virtual 3D See-Through Experiences on Large-size 2D Displays
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811033
Chang Yuan
This paper describes a novel approach for creating virtual 3D see-through experiences on large-size 2D displays. The approach aims to simulate the real-life experience of observing the outside world through one or more windows. The display screen serves as a virtual window: viewers moving in front of the display observe different parts of the scene, which appears to shift in the opposite direction. To generate this see-through experience, a virtual 3D scene is first created and placed behind the display. The viewers' 3D positions and motion are then tracked by a viewer-tracking method based on multiple infra-red cameras, and the virtual scene is rendered in real time by a 3D graphics engine from the tracked viewer positions. A prototype system has been implemented on a large-size tiled display and is able to give viewers a realistic and natural 3D see-through experience.
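The core of any such virtual-window display is a head-coupled, off-axis projection: the view frustum through the fixed screen rectangle is recomputed from the tracked head position every frame. A minimal sketch of that computation under an assumed screen-aligned coordinate frame (screen on the z = 0 plane, viewer at z > 0; names are ours, not the paper's):

```python
# Hypothetical off-axis frustum for a virtual window: as the head
# moves, the frustum through the fixed screen becomes asymmetric,
# revealing different parts of the scene behind the display.

def offaxis_frustum(head, screen_w, screen_h, near):
    """Return OpenGL-style (left, right, bottom, top) at the near
    plane for an eye at head = (x, y, z), looking through a screen of
    size screen_w x screen_h centered at the origin."""
    x, y, z = head
    scale = near / z  # similar triangles: screen edges scaled to near plane
    left = (-screen_w / 2 - x) * scale
    right = (screen_w / 2 - x) * scale
    bottom = (-screen_h / 2 - y) * scale
    top = (screen_h / 2 - y) * scale
    return left, right, bottom, top

# Viewer centered, then stepping 0.3 m right, before a 2 m x 1.2 m screen:
print(offaxis_frustum((0.0, 0.0, 1.5), 2.0, 1.2, 0.1))  # symmetric frustum
print(offaxis_frustum((0.3, 0.0, 1.5), 2.0, 1.2, 0.1))  # skewed frustum
```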
Citations: 6
A Game Theoretic Approach for Modeling User-System Interaction in Networked Virtual Environments
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811053
Shaimaa Y. Lazem, D. Gračanin, Ayman A. Abdel-Hamid
Networked Virtual Environments (NVEs) are distributed 3D simulations shared among geographically dispersed users. Adaptive resource allocation is a key issue in NVEs, since user interactions affect system resources, which in turn affect the user's experience. This interplay between the users and the system can be modeled using Game Theory, an analytical tool that studies decision-making between interacting agents (players) whose decisions strongly affect one another. Based on an exploratory study of mobile virtual environments, we propose a basic structure for a Game Theory model that describes the interaction between the users and the system in NVEs.
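The paper proposes only the structure of such a game, but a two-player payoff matrix makes the idea concrete: the user and the system each choose an action, and each side's best response depends on the other's choice. The actions and payoff values below are invented purely for illustration:

```python
# Hypothetical user-vs-system game. payoff[user][system] gives the
# pair (user_payoff, system_payoff): the user values a rich
# experience, the system values low resource cost.
payoff = {
    "interact_heavily": {"allocate_more": (3, 1), "allocate_less": (0, 2)},
    "interact_lightly": {"allocate_more": (2, 0), "allocate_less": (1, 3)},
}

def best_response_system(user_action):
    """The system's best reply to a fixed user action: maximize its
    own payoff component."""
    options = payoff[user_action]
    return max(options, key=lambda a: options[a][1])

for ua in payoff:
    print(ua, "->", best_response_system(ua))
```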
Citations: 3
Hybrid Rendering in a Multi-framework VR System
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811046
G. Marino, F. Tecchia, D. Vercelli, M. Bergamasco
This work addresses software integration in the context of complex Virtual Reality applications. We propose a novel method to render, in a single graphical context, objects handled by separate programs built on different VR frameworks, even when the various applications are running on different machines. Our technique is based on a network rendering architecture in which origin and destination applications communicate through a set of protocols that deliver compressed graphical instructions and keep the machines synchronized. Existing applications require minimal changes to become compatible with our system; in particular, we tested the validity of our approach on a number of rendering frameworks (Ogre3D, OpenSceneGraph, Torque, XVR). We believe that our technique can remove many integration burdens typically encountered in practice.
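The abstract does not specify the protocols; as a sketch of what "compressed graphical instructions" plus frame synchronization could look like on the wire, here is one hypothetical message format, with the opcode, header layout, and framing all assumed rather than taken from the paper:

```python
# Hypothetical wire format: each message carries an opcode, a frame
# number for keeping machines synchronized, and a zlib-compressed
# payload of graphical instructions.
import struct
import zlib

DRAW_MESH = 1  # invented opcode

def encode(opcode, frame, payload: bytes) -> bytes:
    body = zlib.compress(payload)
    # Header: opcode (u16), frame number (u32), body length (u32)
    return struct.pack("!HII", opcode, frame, len(body)) + body

def decode(message: bytes):
    opcode, frame, length = struct.unpack("!HII", message[:10])
    return opcode, frame, zlib.decompress(message[10:10 + length])

msg = encode(DRAW_MESH, 42, b"illustrative instruction payload")
print(decode(msg))  # -> (1, 42, b'illustrative instruction payload')
```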
Citations: 0
A Multimodal Interface for Artifact's Exploration
Pub Date: 2009-03-14 | DOI: 10.1109/VR.2009.4811054
P. Figueroa, J. Borda, Diego Restrepo, P. Boulanger, Eduardo Londoño, F. Prieto
We present an integrated interface that takes advantage of several VR technologies for the exploration of small artifacts in a museum. The interface allows visitors to observe pieces in 3D from several viewpoints, touch them, feel their weight, and hear the sound they make when touched. On the one hand, it lets visitors examine artifacts more closely, learning more about selected pieces and enhancing their overall museum experience. On the other hand, it leverages existing technologies to provide a multimodal interface whose components can easily be replaced, depending on cost or functionality. We describe some early results and experiences with this setup.
Citations: 1