
IEEE Virtual Reality 2004: Latest Publications

Food simulator: a haptic interface for biting
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.40
Hiroo Iwata, H. Yano, Takahiro Uemura, Tetsuro Moriya
The food simulator is a haptic interface that presents biting force. The taste of food arises from a combination of chemical, auditory, olfactory and haptic sensation. Haptic sensation while eating has been an ongoing problem in taste display. The food simulator generates a force on the user's teeth as an indication of food texture. The device is composed of four linkages. The mechanical configuration of the device is designed such that it will fit into the mouth, with a force sensor attached to the end effector. The food simulator generates a force representing the force profile captured from the mouth of a person biting real food. The device has been integrated with auditory and chemical displays for multi-modal sensation of taste. The food simulator has been tested on a large number of participants. The results indicate that the device has succeeded in presenting food texture as well as chemical taste.
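As an illustration of the force-profile playback described above (not the authors' implementation), the sketch below interpolates a recorded bite-force profile against jaw closure and looks up the force to command at the current closure; the profile values and function names are hypothetical.

```python
import numpy as np

# Hypothetical bite-force profile: force (N) sampled against jaw closure (mm),
# as might be captured by the end-effector force sensor while a person bites
# real food. The numbers are illustrative only.
closure_mm = np.linspace(0.0, 10.0, 50)
recorded_force_n = 20.0 * np.exp(-((closure_mm - 4.0) ** 2) / 2.0)  # a crisp "fracture" peak

def target_force(current_closure_mm: float) -> float:
    """Look up the force to render at the current jaw closure by linearly
    interpolating the recorded profile (simple playback, no dynamics)."""
    return float(np.interp(current_closure_mm, closure_mm, recorded_force_n))

# As the user closes the jaw, the linkage actuators would be commanded to
# resist with the interpolated force.
for pos in (1.0, 4.0, 8.0):
    print(f"closure {pos:4.1f} mm -> command {target_force(pos):5.2f} N")
```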
Citations: 96
Projection based olfactory display with nose tracking
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.62
Y. Yanagida, S. Kawato, H. Noma, A. Tomono, N. Tetsutani
Most attempts to realize an olfactory display have involved capturing and synthesizing the odor, processes that still pose many challenging problems. These difficulties are mainly due to the mechanism of human olfaction, in which a set of so-called "primary odors" has not been found. Instead, we focus on spatio-temporal control of odor rather than synthesizing odor itself. Many existing interactive olfactory displays simply diffuse the scent into the air, which does not provide the ability of spatio-temporal control of olfaction. Recently, however, several researchers have developed olfactory displays that inject scented air under the nose through tubes. By analogy with visual displays, these systems correspond to head-mounted displays (HMDs). They yield a solid way to achieve spatio-temporal control of olfactory space, but they require the user to wear something on his or her face. Here, we propose an unencumbering olfactory display that does not require the user to attach anything to the face. It works by projecting a clump of scented air from a location near the user's nose through free space. We also aim to display a scent to the restricted space around a specific user's nose, rather than scattering scented air by simply diffusing it into the atmosphere. To implement this concept, we used an "air cannon" that generates toroidal vortices of the scented air. We conducted a preliminary experiment to examine this method's ability to display scent to a restricted space. The results show that we could successfully display incense to the target user. Next, we constructed prototype systems. We could successfully bring the scented air to a specific user by tracking the nose position of the user and controlling the orientation of the air cannon toward the user's nose.
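The nose-tracking idea lends itself to a small geometric illustration: given a tracked nose position and the launcher position in a common coordinate frame, compute pan and tilt angles for aiming. This is a hedged sketch under assumed axis conventions, not the paper's control code.

```python
import math

def aim_air_cannon(cannon_pos, nose_pos):
    """Compute pan/tilt angles (degrees) that point a scent launcher at a
    tracked nose position. Positions are (x, y, z) in a shared tracker frame
    with y up; the frame convention and names are assumptions."""
    dx = nose_pos[0] - cannon_pos[0]
    dy = nose_pos[1] - cannon_pos[1]
    dz = nose_pos[2] - cannon_pos[2]
    pan = math.degrees(math.atan2(dx, dz))                    # rotation about the up axis
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # elevation toward the nose
    return pan, tilt

# Example: nose tracked 1.5 m in front of and slightly above the cannon.
print(aim_air_cannon((0.0, 1.0, 0.0), (0.3, 1.2, 1.5)))
```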
Citations: 153
Efficient, intuitive user interfaces for classroom-based immersive virtual environments
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.38
D. Bowman, M. Gracey, John F. Lucas
The educational benefits of immersive virtual environments (VEs) have long been touted, but very few immersive VEs have been used in a classroom setting. We have developed three educational VEs and deployed them in university courses. A key element in the success of these applications is a simple but powerful user interface (UI) that requires no training, yet allows students to interact with the virtual world in meaningful ways. We discuss the design of this UI and the results of an evaluation of its usability in university classrooms.
Citations: 4
Increasing the effective egocentric field of view with proprioceptive and tactile feedback
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.44
Ungyeon Yang, G. Kim
Multimodality often exhibits synergistic effects: each modality complements and compensates for other modalities in transferring coherent, unambiguous, and enriched information for higher interaction efficiency and improved sense of presence. In this paper, we explore one such phenomenon: a positive interaction among the geometric field of view, proprioceptive interaction, and tactile feedback. We hypothesize that, with proprioceptive interaction and tactile feedback, the geometric field of view and thus visibility can be increased such that it is larger than the physical field of view, without causing a significant distortion in the user's distance perception. This, in turn, would further help operation of the overall multimodal interaction scheme as the user is more likely to receive the multimodal feedback simultaneously. We tested our hypothesis with an experiment to measure the user's change in distance perception according to different values of egocentric geometric field of view and feedback conditions. Our experimental results have shown that, when coupled with physical interaction, the GFOV could be increased by up to 170 percent of the physical field of view without introducing significant distortion in distance perception. Second, when tactile feedback was introduced, in addition to visual and proprioceptive cues, the GFOV could be increased by up to 200 percent. The results offer a useful guideline for effectively utilizing modality compensation and building multimodal interfaces for close-range spatial tasks in virtual environments. In addition, it demonstrates one way to overcome the shortcomings of the narrow (physical) fields of view of most contemporary HMDs.
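To make the GFOV/physical-FOV relationship concrete, the sketch below scales a physical display FOV by a ratio such as 1.7 (the 170 percent case reported above) and builds a standard perspective projection from the resulting geometric FOV; the helper names are illustrative and not taken from the paper.

```python
import math

def geometric_fov(physical_fov_deg: float, gfov_ratio: float) -> float:
    """Geometric field of view obtained by scaling the physical display FOV,
    e.g. a ratio of 1.7 for a GFOV of 170 percent of the physical FOV."""
    return physical_fov_deg * gfov_ratio

def perspective_projection(fov_y_deg: float, aspect: float, near: float, far: float):
    """Standard perspective matrix built from a vertical FOV; rendering with a
    larger GFOV shows more of the scene through the same physical display."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# Example: a 40-degree physical FOV rendered with a 170 percent geometric FOV.
proj = perspective_projection(geometric_fov(40.0, 1.7), aspect=4 / 3, near=0.1, far=100.0)
```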
Citations: 18
Adaptive scene synchronization for virtual and mixed reality environments
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.9
Felix G. Hamza-Lup, J. Rolland
Technological advances in virtual environments facilitate the creation of distributed collaborative environments, in which the distribution of three-dimensional content at remote locations allows efficient and effective communication of ideas. One of the challenges in distributed shared environments is maintaining a consistent view of the shared information in the presence of inevitable network delays and variable bandwidth. A consistent view of a shared 3D scene may significantly increase the sense of presence among participants and improve their interactivity. This paper introduces an adaptive scene synchronization algorithm and a framework for integrating the algorithm into a distributed real-time virtual environment. In spite of significant network delays, results show that objects can remain synchronized in the views presented at multiple remotely located sites. Furthermore, residual asynchronicity is quantified as a function of network delay and scalability.
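As a minimal illustration of delay-aware synchronization in general (the paper's specific algorithm is not described here), the sketch below advances a received object state by the measured transit delay so that remote sites converge on a similar current state; it assumes synchronized clocks and a constant-velocity model, both of which are assumptions of this sketch.

```python
import time

def apply_remote_update(position, velocity, sent_timestamp, now=None):
    """Advance a received object state by the measured transit delay so that
    all sites converge on a similar current state. Assumes synchronized
    clocks and constant velocity over the delay interval."""
    now = time.time() if now is None else now
    delay = max(0.0, now - sent_timestamp)   # one-way network delay in seconds
    return tuple(p + v * delay for p, v in zip(position, velocity))

# Example: an update sent 80 ms ago for an object moving at 0.5 m/s along x.
print(apply_remote_update((1.0, 0.0, 0.0), (0.5, 0.0, 0.0), time.time() - 0.08))
```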
Citations: 11
Improving collision detection in distributed virtual environments by adaptive collision prediction tracking
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.43
Jan Ohlenburg
Collision detection for dynamic objects in distributed virtual environments is still an open research topic. The problems of network latency and limited available network bandwidth prevent exact common solutions. The consistency-throughput tradeoff states that a distributed virtual environment cannot be consistent and highly dynamic at the same time. Remote object visualization is used to extrapolate and predict the movement of remote objects, reducing the bandwidth required for good approximations of remote objects. Infrequent update messages aggravate the effect of network latency on collision detection. In this paper, a new approach extending remote object visualization techniques is demonstrated that improves the results of collision detection in distributed virtual environments. We show how this approach can significantly reduce the approximation errors caused by remote object visualization techniques. This is done by predicting collisions between remote objects and adaptively changing the parameters of these techniques.
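A worked example of collision prediction on extrapolated (dead-reckoned) state, in the spirit of the approach but not taken from the paper: two bounding spheres with last known positions and velocities yield a closed-form earliest collision time.

```python
import numpy as np

def predicted_collision_time(p1, v1, r1, p2, v2, r2):
    """Earliest non-negative time t at which two dead-reckoned spheres touch,
    solving |(p1 - p2) + (v1 - v2) * t| = r1 + r2, or None if they never do."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    r = r1 + r2
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - r * r
    if c <= 0.0:
        return 0.0                    # already overlapping
    if a == 0.0:
        return None                   # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                   # closest approach stays farther than r1 + r2
    t = (-b - disc ** 0.5) / (2.0 * a)
    return t if t >= 0.0 else None

# Two objects approaching head-on at 1 m/s each, radii 0.1 m: collide at t = 0.9 s.
print(predicted_collision_time((0, 0, 0), (1, 0, 0), 0.1, (2, 0, 0), (-1, 0, 0), 0.1))
```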
Citations: 14
Creating VR scenes using fully automatic derivation of motion vectors
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.36
Kensuke Habuka, Y. Shinagawa
We propose a new method to create smooth VR scenes using a limited number of images and the motion vectors among them. We discuss two specific components that simulate a majority of VR scenes: MV VR Object and MV VR Panorama. They provide functions similar to QuickTime VR Object and QuickTime VR Panorama (Chen and Williams, 1993). However, our method can interpolate between the existing images, and therefore smooth movement of viewpoints is achieved. When we look at a primitive from arbitrary viewpoints, the images of the object associated with the primitive are transformed according to the motion vectors and the location of the viewpoint.
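For intuition about interpolating between existing images with motion vectors, here is a hedged sketch of forward-warping a source image a fraction of the way along per-pixel motion vectors; the nearest-pixel warping and array layout are assumptions of this sketch, not the authors' method.

```python
import numpy as np

def warp_by_motion_vectors(image_a, flow_ab, alpha):
    """Shift each pixel of image_a along a fraction alpha (0..1) of its motion
    vector toward the neighbouring view. flow_ab has shape (H, W, 2) holding
    per-pixel (dy, dx); nearest-pixel forward warping, illustration only."""
    h, w = image_a.shape[:2]
    out = np.zeros_like(image_a)
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.clip(np.round(ys + alpha * flow_ab[..., 0]).astype(int), 0, h - 1)
    tx = np.clip(np.round(xs + alpha * flow_ab[..., 1]).astype(int), 0, w - 1)
    out[ty, tx] = image_a[ys, xs]
    return out

# Example: a tiny 4x4 grayscale image shifted halfway along a uniform flow.
img = np.arange(16, dtype=float).reshape(4, 4)
flow = np.full((4, 4, 2), 1.0)   # every pixel moves one pixel down and right
print(warp_by_motion_vectors(img, flow, alpha=0.5))
```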
Citations: 1
Resolving object references in multimodal dialogues for immersive virtual environments
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.67
Thies Pfeiffer, Marc Erich Latoschik
This paper describes the underlying concepts and the technical implementation of a system for resolving multimodal references in virtual reality (VR). In this system, the temporal and semantic relations intrinsic to referential utterances are expressed as a constraint satisfaction problem, where the propositional value of each referential unit during a multimodal dialogue incrementally updates the active set of constraints. Because the system is based on findings from human cognition research, it also takes into account, for example, constraints implicitly assumed by human communicators. The implementation takes VR-related real-time and immersive conditions into account and adapts its architecture to well-known scene-graph-based design patterns by introducing a so-called reference resolution engine. In both the conceptual work and the implementation, special care has been taken to allow further refinement and modification of the underlying resolution processes at a high level.
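To illustrate how referential units can incrementally narrow a candidate set, here is a minimal sketch that treats each cue as a filter over scene objects, a simplified stand-in for the constraint satisfaction formulation; the scene attributes and cue predicates are hypothetical.

```python
# Hypothetical scene objects and cue predicates; the attribute names are
# illustrative and not taken from the paper's ontology.
scene = [
    {"id": "obj1", "shape": "cone", "color": "red",  "x": -0.4},
    {"id": "obj2", "shape": "cone", "color": "blue", "x":  0.2},
    {"id": "obj3", "shape": "box",  "color": "red",  "x": -0.1},
]

def resolve(candidates, constraints):
    """Apply each constraint (a predicate over an object record) in the order
    the referential units arrive, incrementally narrowing the candidate set."""
    for satisfies in constraints:
        candidates = [obj for obj in candidates if satisfies(obj)]
    return candidates

constraints = [
    lambda o: o["shape"] == "cone",   # spoken noun: "cone"
    lambda o: o["color"] == "red",    # adjective: "red"
    lambda o: o["x"] < 0.0,           # deictic gesture: pointing to the left
]
print(resolve(scene, constraints))    # -> the single object 'obj1'
```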
Citations: 29
Co-location and tactile feedback for 2D widget manipulation
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.14
A. Kok, R. V. Liere
This study investigated the effect of co-location and tactile feedback on 2D widget manipulation tasks in virtual environments. Task completion time and positioning accuracy were measured for subjects under four conditions (co-located vs. not co-located, and tactile feedback vs. no tactile feedback). Performance results indicate that co-location and tactile feedback both significantly improve the performance of 2D widget manipulation in 3D virtual environments. Subjective results support these findings.
Citations: 19
Testbed evaluation of navigation and text display techniques in an information-rich virtual environment
Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.73
Jian Chen, P. Pyla, Doug A. Bowman
The fundamental question for an information-rich virtual environment is how to access and display abstract information. We investigated two existing navigation techniques: hand-centered object manipulation extending ray-casting (HOMER) and go-go navigation, and two text layout techniques: within-the-world display (WWD) and heads-up display (HUD). Four search tasks were performed to measure participants' performance in a densely packed environment. HUD enabled significantly better performance than WWD and the go-go technique enabled better performance than the HOMER technique for most of the tasks. We found that using HOMER navigation combined with the WWD technique was significantly worse than other combinations for difficult naive search tasks. Users also preferred the combination of go-go and HUD for all tasks.
Citations: 9