
2011 IEEE International Symposium on VR Innovation: Latest Publications

Use of interactive Virtual Prototypes to define product design specifications: A pilot study on consumer products
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759592
M. Bordegoni, F. Ferrise, Joseba Lizaranzu
Virtual Prototyping (VP) aims at substituting the physical prototypes currently used in industrial design practice with virtual replicas. The ultimate goal of VP is to reduce the cost and time necessary to implement and test different design solutions. The paper describes a pilot study aimed at understanding how interactive Virtual Prototypes (iVPs) of consumer products, where interaction is based on a combination of haptic, sound, and 3D visualization technologies, would allow us to design the interaction parameters that shape the first impression customers form when interacting with a product. We selected two commercially available products and, once the corresponding virtual replicas were created, first checked the fidelity of the iVPs by comparing them with the real products when used to perform the same activities. Then, unlike the traditional use of Virtual Prototypes for product design evaluation, we used them for haptic interaction design, i.e. as a means to define design variables for the specification of new products: variations are applied to iVP haptic parameters so as to test with final users their preferences concerning the haptic interaction with a simulated product. The iVP configuration that users liked most was then used to define the specifications for the design of the new product.
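To make the idea of varying iVP haptic parameters concrete, here is a minimal sketch, in Python, of how one might enumerate candidate haptic configurations and pick the variant users rate highest. The parameter names (stiffness, friction, detent) and ranges are illustrative assumptions, not values taken from the paper, and the ratings would come from actual user trials.

```python
from itertools import product

# Hypothetical haptic parameters for an interactive Virtual Prototype (iVP).
# The parameter names and ranges are illustrative assumptions, not values
# reported in the paper.
STIFFNESS_N_PER_MM = [0.5, 1.0, 2.0]      # resistance of a button/knob
FRICTION_COEFF     = [0.1, 0.3]           # sliding friction of moving parts
DETENT_STRENGTH    = [0.0, 0.5, 1.0]      # "click" feel when rotating a dial

def build_variants():
    """Enumerate every combination of haptic settings to test with users."""
    return [
        {"stiffness": s, "friction": f, "detent": d}
        for s, f, d in product(STIFFNESS_N_PER_MM, FRICTION_COEFF, DETENT_STRENGTH)
    ]

def preferred_variant(ratings):
    """Pick the configuration with the highest mean user rating."""
    return max(ratings, key=lambda item: sum(item[1]) / len(item[1]))[0]

if __name__ == "__main__":
    variants = build_variants()
    # Ratings would normally come from user trials; placeholders here.
    fake_ratings = [(v, [3, 4, 5]) for v in variants]
    print(len(variants), "variants; preferred:", preferred_variant(fake_ratings))
```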
Citations: 36
Empirical evaluation of augmented information presentation on small form factors - navigation assistant scenario
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759606
Subhashini Ganapathy, Glen J. Anderson, I. Kozintsev
Mobile Augmented Reality (MAR) enabled devices will have the capability to present a large amount of information in real time, based on sensors that determine proximity, visual reference, maps, and detailed information on the environment. This poses the challenge of presenting the information so that there is no cognitive overload for the user and the augmented information that is presented is useful and meaningful. This study examined user tolerance and identified acceptable values for the performance characteristics of the presented augmented information: density of information, accuracy of information, delay in information presentation, and error rate. Results indicate that the amount of information presented depends on the type of activity that the user is interested in. For example, in the case of information density, participants were interested in seeing about 7 items identified at a time. With 11 items, most were overwhelmed, but 4 items were not enough. However, the desired information density depends on the information shown, and the participants wanted to control the type of information shown. The findings of the study can be used as design guidelines for MAR information overlays on small screens.
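The density finding (about seven items at a time acceptable, eleven overwhelming, four too few) maps naturally onto a simple overlay filter. The sketch below is a hypothetical application of that guideline, not code from the study: it ranks candidate annotations by distance and caps how many are shown.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    label: str
    distance_m: float   # distance from the user to the annotated point

# Around seven simultaneous items was acceptable in the study; applying it as a
# hard cap is an assumption about how to use that finding, not the authors' code.
MAX_VISIBLE_ITEMS = 7

def select_overlay(annotations, max_items=MAX_VISIBLE_ITEMS):
    """Keep only the nearest annotations so the overlay stays readable."""
    return sorted(annotations, key=lambda a: a.distance_m)[:max_items]

if __name__ == "__main__":
    distances = [12.0, 45.0, 8.0, 150.0, 60.0, 5.0, 90.0, 30.0, 70.0, 20.0, 110.0]
    pois = [Annotation(f"shop-{i}", d) for i, d in enumerate(distances)]
    for a in select_overlay(pois):
        print(a.label, a.distance_m)
```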
Citations: 12
A simplified vibrotactile navigation system for sightseeing
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759597
Yuji Tamiya, T. Nojima
We propose a new sightseeing support system that allows users to focus on environmental information at tourist sites. The main aim of our project is to enable users to recognize the physical positional relation between their current position and their destination. The user moves our device in a 360-degree circle around his body to perceive direction and distance to the destination through the sense of touch. When pointed towards the destination, our system enables the user to estimate arrival time through simple information provided by the device. Furthermore, because the system does not hinder the user's vision or hearing, from the aspect of sightseeing and safety, our approach advances tourism. In this paper, we evaluate an information presentation method that uses vibration to provide the direction and distance to the destination. We also show the results of a navigation experiment using our system.
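As a rough illustration of how direction and distance can be encoded in a vibration cue, the sketch below computes the bearing from the user to the destination, compares it with the direction the device is pointed, and maps the angular error and remaining distance to a vibration intensity and pulse interval. The specific mapping functions and constants are assumptions for illustration, not the parameters used in the paper.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def vibration_cue(pointing_deg, target_bearing_deg, distance_m):
    """Map angular error and distance to (intensity 0..1, pulse interval in seconds)."""
    error = abs((target_bearing_deg - pointing_deg + 180.0) % 360.0 - 180.0)
    intensity = max(0.0, 1.0 - error / 90.0)        # strongest when pointed at the target
    interval = min(2.0, 0.2 + distance_m / 500.0)   # faster pulses when closer
    return intensity, interval

if __name__ == "__main__":
    tgt = bearing_deg(35.6586, 139.7454, 35.6595, 139.7005)  # example coordinates
    print(vibration_cue(pointing_deg=270.0, target_bearing_deg=tgt, distance_m=400.0))
```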
Citations: 1
Camera tracking using partially modeled 3-D objects with scene textures
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759654
Byung-Kuk Seo, Jong-Il Park, Hanhoon Park
This paper presents an efficient camera tracking method that uses prior knowledge of a target scene: 3-D object models with scene textures. The tracking relies on partially modeled 3-D objects instead of complete, detailed models, which are difficult to build for complex scenes containing a variety of 3-D objects. For robust and accurate camera tracking, scene textures are also sparsely modeled; they help reduce the uncertainty of camera poses, handle partial occlusions of visual cues, and initialize and recover the tracking. The effectiveness of the method is verified by demonstrating its performance in various scenes.
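The core step in this kind of model-based tracking is estimating the camera pose from correspondences between modeled 3-D points and their 2-D image observations. The sketch below uses OpenCV's generic PnP solver as a stand-in for that step, with made-up intrinsics and correspondences; it illustrates the principle rather than the authors' tracking pipeline.

```python
import numpy as np
import cv2  # OpenCV

# Assumed pinhole intrinsics for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume no lens distortion

# Correspondences between modeled 3-D object points (object frame, meters)
# and their detected 2-D image locations (pixels). Placeholder values for a
# planar 10 cm square face of a partially modeled object.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.1, 0.0, 0.0],
                       [0.1, 0.1, 0.0],
                       [0.0, 0.1, 0.0]], dtype=np.float64)
image_pts = np.array([[320.0, 240.0],
                      [400.0, 238.0],
                      [402.0, 318.0],
                      [322.0, 320.0]], dtype=np.float64)

# Estimate the camera pose (rotation + translation) from the correspondences.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix
    print("camera translation:", tvec.ravel())
```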
Citations: 2
From whereware to whence- and whitherware: Augmented audio reality for position-aware services
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759650
Michael Cohen, J. Villegas
Since audition is omnidirectional, it is especially receptive to orientation modulation. Position can be defined as the combination of location and orientation information. Location-based or location-aware services do not generally require orientation information, but position-based services are explicitly parameterized by angular bearing as well as place. “Whereware” [7] suggests using hyperlocal georeferences to allow applications location-awareness; “whence- and whitherware” suggests the potential of position-awareness to enhance navigation and situation awareness, especially in realtime high-definition communication interfaces, such as spatial sound augmented reality applications. Combining literal direction effects and metaphorical (remapped) distance effects in whence- and whitherware position-aware applications invites oversaturation of interface channels, encouraging interface strategies such as audio windowing, narrowcasting, and multipresence.
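A minimal sketch of the location-versus-position distinction, under assumed data structures rather than anything defined in the paper: a Position couples latitude/longitude with a compass heading, and a position-aware service needs the bearing of a point of interest relative to that heading (for example, to pan a spatial audio cue), not just its absolute bearing.

```python
import math
from dataclasses import dataclass

@dataclass
class Position:
    """Position = location (lat/lon) + orientation (compass heading in degrees)."""
    lat: float
    lon: float
    heading_deg: float

def bearing_deg(a, b):
    """Initial bearing from point a to point b (degrees, clockwise from north)."""
    phi1, phi2 = math.radians(a.lat), math.radians(b.lat)
    dlon = math.radians(b.lon - a.lon)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def relative_bearing_deg(listener, poi):
    """Bearing of the point of interest relative to where the listener is facing.
    A location-aware service stops at bearing_deg(); a position-aware one needs this."""
    return (bearing_deg(listener, poi) - listener.heading_deg) % 360.0

if __name__ == "__main__":
    listener = Position(lat=40.4461, lon=140.4756, heading_deg=90.0)
    poi = Position(lat=40.4470, lon=140.4800, heading_deg=0.0)
    print(round(relative_bearing_deg(listener, poi), 1), "degrees relative to the listener's heading")
```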
Citations: 7
AR-HUD system for tower crane on construction field
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759648
Kyeong-Geun Park, Hanna Lee, Hyungseok Kim, Jee-In Kim, Hanku Lee, M. Pyeon
Recently, the safety problem on construction sites has become more serious as construction environments grow more complex. For example, accidents can occur because many heavy materials and pieces of equipment are present on site. The tower crane is one of the heavy machines used to move heavy materials on a construction site. The tower crane driver must be able to identify all materials on the ground while sitting on top of the crane, which can be around 100 meters high. Especially for hidden objects and small materials, the driver needs help from workers on the ground to get information about those materials, which is often provided with hand gestures or mere shouting. Unfortunately, because of the long distance and the small size of the gestures, these communication methods do not let the driver recognize the exact position of events in real time. In this work, we suggest an augmented reality based guidance system for tower cranes. We supply the tower crane driver with visualized information about important events and materials on the site, aligned to their positions in real time. Augmented reality technology is adopted to present information at the aligned position where the driver is looking. To do this, we use a head tracker to provide interaction between the user and the 3D viewport. From the tracked head position, the system visualizes safe/dangerous areas, wind direction and velocity, and quantities of materials on the tower crane's window through a transparent screen. The system is designed to provide the necessary task information to tower crane drivers in real time, to increase the safety of the construction site.
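Aligning an overlay with where the driver is looking amounts to projecting a world-space point, such as a load on the ground, into screen coordinates given the tracked head pose. The sketch below is a simplified pinhole-style projection under assumed units, intrinsics, and a screen fixed to the gaze direction; it is an illustration, not the authors' system.

```python
import numpy as np

def look_at_rotation(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix whose rows are the viewer's right, up, and forward axes."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    return np.vstack([right, true_up, fwd])

def project_to_screen(world_pt, head_pos, head_target, f_px=1000.0, cx=640.0, cy=360.0):
    """Project a world-space point into pixels of a display aligned with the gaze.
    Returns None if the point is behind the viewer. Intrinsics are assumptions."""
    R = look_at_rotation(head_pos, head_target)
    cam = R @ (world_pt - head_pos)          # point in head/viewer coordinates
    if cam[2] <= 0.0:
        return None
    u = cx + f_px * cam[0] / cam[2]
    v = cy - f_px * cam[1] / cam[2]
    return u, v

if __name__ == "__main__":
    head = np.array([0.0, 0.0, 100.0])          # driver's cab, ~100 m up (illustrative)
    gaze_target = np.array([30.0, 40.0, 0.0])   # where the driver is looking on the ground
    load = np.array([32.0, 38.0, 0.0])          # a material to annotate
    print(project_to_screen(load, head, gaze_target))
```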
Citations: 5
The dynamic simulator of forest evolution
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759639
Jing Fan, Xinxin Guan, Ying Tang
Forest evolution simulation and visualization are challenging tasks in terms of complex interactions at various time and space scales. In this paper we present a forest evolution simulator system based on an individual-based, spatially explicit forest gap model, incorporating fine-scale processes of neighbor competition and understory recruitment. The forest evolution for each growth cycle is visualized by using the simulation results to render the forest scene. Users can walk through the forest scene interactively. We also adopt the billboard rendering technique to enhance the navigation experience effectively. The system is implemented with Visual C++ 6.0 and OpenGL/GLUT, and the simulation results are satisfactory.
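An individual-based, spatially explicit gap model reduces, per growth cycle, to a loop over trees: growth moderated by neighbor competition, stochastic mortality, and recruitment of new saplings. The sketch below is a drastically simplified, hypothetical version of such a loop; the growth, competition, and mortality rules are assumptions, not the simulator's actual equations.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Tree:
    x: float       # position in the plot (m)
    y: float
    dbh: float     # diameter at breast height (cm)

PLOT_SIZE = 100.0        # 100 m x 100 m plot (assumed)
NEIGHBOR_RADIUS = 10.0   # competition radius (assumed)

def competition_index(tree, trees):
    """Sum of neighbor sizes within the radius, weighted by inverse distance."""
    ci = 0.0
    for other in trees:
        if other is tree:
            continue
        d = math.hypot(tree.x - other.x, tree.y - other.y)
        if 0.0 < d < NEIGHBOR_RADIUS:
            ci += other.dbh / d
    return ci

def growth_cycle(trees, rng):
    """One cycle: grow, apply mortality, recruit new saplings. Rules are illustrative."""
    for t in trees:
        t.dbh += max(0.0, 1.0 - 0.02 * competition_index(t, trees))   # cm per cycle
    survivors = [t for t in trees if rng.random() > 0.01 + 0.0005 * t.dbh]
    recruits = [Tree(rng.uniform(0, PLOT_SIZE), rng.uniform(0, PLOT_SIZE), 1.0)
                for _ in range(rng.randint(0, 5))]
    return survivors + recruits

if __name__ == "__main__":
    rng = random.Random(42)
    forest = [Tree(rng.uniform(0, PLOT_SIZE), rng.uniform(0, PLOT_SIZE), rng.uniform(5, 40))
              for _ in range(200)]
    for cycle in range(50):
        forest = growth_cycle(forest, rng)
    print(len(forest), "trees after 50 cycles")
```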
Citations: 1
Sharing space in mixed and virtual reality environments using a low-cost depth sensor
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759673
Evan A. Suma, D. Krum, M. Bolas
We describe an approach for enabling people to share virtual space with a user that is fully immersed in a head-mounted display. By mounting a recently developed low-cost depth sensor to the user's head, depth maps can be generated in real-time based on the user's gaze direction, allowing us to create mixed reality experiences by merging real people and objects into the virtual environment. This enables verbal and nonverbal communication between users that would normally be isolated from one another. We present the implementation of the technique, then discuss the advantages and limitations of using commercially available depth sensing technology in immersive virtual reality applications.
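The key operation here is turning the head-mounted sensor's depth map into 3-D points expressed in the virtual environment's coordinates, using the sensor intrinsics and the tracked head pose. The sketch below shows that back-projection under assumed, roughly Kinect-class intrinsics; it is a generic illustration, not the authors' code.

```python
import numpy as np

# Assumed depth-camera intrinsics (illustrative values only).
FX, FY = 580.0, 580.0
CX, CY = 320.0, 240.0

def depth_to_points(depth_m, head_rotation, head_position):
    """Back-project a depth image (meters) to world-space points.

    depth_m:       HxW array of depths along the sensor z-axis.
    head_rotation: 3x3 rotation of the head-mounted sensor in world coordinates.
    head_position: 3-vector sensor position in world coordinates.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts_cam = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                 # drop invalid (zero-depth) pixels
    return pts_cam @ head_rotation.T + head_position     # into virtual-world coordinates

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 2.0)    # a flat surface 2 m in front of the sensor
    R = np.eye(3)                            # head looking straight ahead
    p = np.array([0.0, 1.7, 0.0])            # head ~1.7 m above the virtual floor
    pts = depth_to_points(fake_depth, R, p)
    print(pts.shape, pts[:2])
```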
Citations: 18
Workplace collaboration in a 3D Virtual Office
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759582
Geetika Sharma, Gautam M. Shroff, P. Dewan
We describe Virtual Office, an innovative application of virtual world technology for enabling informal office interactions and collaboration even when some of the participants are physically out of the office. Each instance of the system is tied to an actual physical office, so the communication and visual channels created among its users are designed to offer the level of privacy of the corresponding real-world office. VirtualOffice supports auras and automated navigation based on logical seats in the office, rather than geometric distances. The system is implemented using a distributed MVC architecture employing a practical combination of (a) push and pull communication and (b) cloud-based servers. The system is designed to support remote 'management by walking around' as well as virtual visits to both collaborators' and one's own offices, thereby enabling informal conversations that seamlessly bridge the physical and virtual worlds. VirtualOffice also represents a new point in both Benford's and Schroeder's taxonomies of collaboration systems, which classify instant messaging, virtual worlds, and video conferencing. A detailed scenario is used to motivate our new design point and compare it with commonly used and emerging collaboration applications, as well as established virtual worlds such as Second Life, for the specific purpose of informal office collaboration.
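To make the 'combination of push and pull communication' concrete, the sketch below shows one common pattern under assumed semantics (not the paper's actual protocol): small presence changes are pushed to subscribed clients immediately, while the full office state is pulled on demand, for example when a client joins or reconnects.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class OfficeModel:
    """Shared model: presence changes are pushed; full state can be pulled."""
    seats: Dict[str, str] = field(default_factory=dict)          # seat id -> occupant
    subscribers: List[Callable[[str, str], None]] = field(default_factory=list)

    def subscribe(self, callback):
        """Register a client callback for pushed presence updates."""
        self.subscribers.append(callback)

    def set_presence(self, seat_id, occupant):
        """Update the model and push the (small) change to every subscriber."""
        self.seats[seat_id] = occupant
        for notify in self.subscribers:
            notify(seat_id, occupant)

    def snapshot(self):
        """Pull path: return the full state, e.g. for a client that just joined."""
        return dict(self.seats)

if __name__ == "__main__":
    office = OfficeModel()
    office.subscribe(lambda seat, who: print(f"push: {who} is now at {seat}"))
    office.set_presence("seat-12", "alice")
    office.set_presence("seat-07", "bob")
    print("pull:", office.snapshot())
```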
Citations: 6
Automatic skeleton generation and character skinning
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759655
Wang Cheng, Ren Cheng, Xiaoyong Lei, Shuling Dai
Skinning an articulated character usually requires manual skeleton embedding and vertex weight painting. We propose a fast, automatic method for skeleton generation and character skinning. First, we segment the given character mesh through the sequential steps of NCV (normal characteristic value) computation, segment point refinement, and principal component analysis of the segment clusters. Then, two types of joints and a skeleton are generated based on the mesh segmentation result. Furthermore, we automatically compute the weights of the vertices influenced by the skeleton and then skin the character mesh with a skeleton-driven and muscle-pushing algorithm. Experimental results show that our method achieves both high visual quality and fast speed. It could be used in character animation and VR real-time applications.
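The skinning step the abstract refers to, deforming the mesh by the skeleton using per-vertex weights, is commonly expressed as linear blend skinning, where each vertex is transformed as v' = sum_i w_i * (T_i * v). The sketch below implements that standard formulation as a point of reference; the paper's skeleton-driven and muscle-pushing algorithm goes beyond plain linear blending.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Standard linear blend skinning: v' = sum_i w_i * (T_i @ v).

    vertices:        (V, 3) rest-pose vertex positions.
    weights:         (V, B) per-vertex bone weights, rows summing to 1.
    bone_transforms: (B, 4, 4) homogeneous transforms mapping the rest pose to the
                     current pose (bind-pose inverse already folded in).
    """
    v_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)   # (V, 4)
    # Transform every vertex by every bone: result has shape (B, V, 3).
    per_bone = np.einsum("bij,vj->bvi", bone_transforms, v_h)[..., :3]
    # Blend the per-bone results with the skinning weights: (V, 3).
    return np.einsum("vb,bvi->vi", weights, per_bone)

if __name__ == "__main__":
    verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    # Two bones: identity, and a +1 translation along y.
    T0 = np.eye(4)
    T1 = np.eye(4); T1[1, 3] = 1.0
    bones = np.stack([T0, T1])
    w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])   # weights per vertex
    print(linear_blend_skinning(verts, w, bones))
```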
Citations: 9