
Latest Publications: 2009 IEEE Virtual Reality Conference

Virtual Experiences for Social Perspective-Taking
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811005
A. Raij, Aaron Kotranza, D. Lind, Benjamin C. Lok
This paper proposes virtual social perspective-taking (VSP). In VSP, users are immersed in an experience of another person to aid in understanding the person's perspective. Users are immersed by 1) providing input to user senses from logs of the target person's senses, 2) instructing users to act and interact like the target, and 3) reminding users that they are playing the role of the target. These guidelines are applied to a scenario where taking the perspective of others is crucial - the medical interview. A pilot study (n = 16) using this scenario indicates VSP elicits reflection on the perspectives of others and changes behavior in future, similar social interactions. By encouraging reflection and change, VSP advances the state-of-the-art in training social interactions with virtual experiences.
Citations: 23
A VR Multimodal Interface for Small Artifacts in the Gold Museum
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811061
P. Figueroa, J. Borda, Diego Restrepo, P. Boulanger, Eduardo Londoño, F. Prieto
The Gold Museum, in Bogotá, Colombia, displays the largest collection of pre-Hispanic gold artifacts in the world and has recently been renovated. With funds from the Colombian Government, we have created a multimodal experience that allows visitors to touch, hear, and see small artifacts. Here we present a description of this demo, its functionality, and its technical requirements.
Citations: 1
Crossover Applications
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811068
Brian Wilke, Jonathan Metzgar, Keith Johnson, S. Semwal, B. Snyder, KaChun Yu, D. Neafus
VR applications provide an opportunity to study a variety of new applications. One of the focus areas of the media convergence, games and media integration (McGMI) program is to develop new media applications for the visually impaired population. We are particularly interested in developing applications which are at the same time interesting for the sighted population as well; hence the title "crossover applications." Bonnie Snyder, who has been working with the visually impaired population for more than twenty years, visited a group of students early in Fall 2008. As many typical applications are geared toward the sighted population, the cost of software and hardware systems tends to be much higher. In addition, several games, developed primarily for the sighted, provide minimal interaction for the blind. Although this issue remains a topic of discussion at IEEE VR, ISMAR, and related conferences, much more can be done. We used this as motivation and developed three applications for both the sighted and the visually impaired population: (a) a haptic chess program combining PHANToM force feedback interaction with OpenAL audio; (b) simple hand-movement recognition on the iPhone providing a hierarchical menu application; (c) a barnyard fun program using animal-sound feedback to facilitate spatial selection. In the future, we expect to conduct testing of these applications at the Denver Museum, if possible.
Citations: 10
Virtual Welder Trainer
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811066
Steven A. White, Mores Prachyabrued, Dhruva Baghi, Amit Aglawe, D. Reiners, C. Borst, Terry Chambers
The goal of this project is to develop a training system that can simulate the welding process in real time, give feedback that keeps beginning welders from learning wrong motion patterns, and allow the teacher to analyze the process afterwards. The system is based mainly on COTS components: a standard PC with a dual-core CPU and a mid-range nVidia graphics card is sufficient. Input is done with a regular welding gun to allow realistic training. The gun is tracked by an OptiTrack system with 3 FLEX:V100 cameras, which is also used to track a regular welding helmet to obtain accurate eye positions for display; the helmet was chosen over glasses for robustness. The display itself is a Zalman Trimon stereo monitor laid out horizontally. The software is designed around a main simulation component that solves heat conduction on a grid of simulation points using local Gauss-Seidel elimination.
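The heat-conduction solver described above can be illustrated with a minimal sketch: Gauss-Seidel relaxation of steady-state heat diffusion on a 2D grid. The grid size, boundary handling, and temperatures below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gauss_seidel_heat(T, fixed, iters=500):
    """In-place Gauss-Seidel relaxation of the steady-state heat equation
    on a 2D grid. `fixed` is a boolean mask of cells whose temperature is
    prescribed (e.g. the arc spot and the cold workpiece edges)."""
    rows, cols = T.shape
    for _ in range(iters):
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                if fixed[i, j]:
                    continue
                # Each free cell relaxes toward the mean of its 4 neighbours,
                # using already-updated values (Gauss-Seidel ordering).
                T[i, j] = 0.25 * (T[i-1, j] + T[i+1, j] + T[i, j-1] + T[i, j+1])
    return T

# Toy example: a hot spot (the arc) held at 1000 on an otherwise cold plate.
T = np.zeros((20, 20))
fixed = np.zeros_like(T, dtype=bool)
T[10, 10] = 1000.0
fixed[10, 10] = True
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # cold edges
gauss_seidel_heat(T, fixed)
```

Sweeping the grid in place is what distinguishes Gauss-Seidel from Jacobi iteration: updated neighbour values are used immediately, which roughly halves the iterations needed to converge.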
Citations: 10
False Image Projector For Head Mounted Display Using Retrotransmissive Optical System
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811063
R. Kijima, J. Watanabe
A so-called "false image projector," based on the novel notion of "retrotransmission," is proposed, and an early prototype will be shown in the demo. This article also describes other research activities of the authors' lab.
Citations: 2
Crafting Personalized Facial Avatars Using Editable Portrait and Photograph Example
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811044
Tanasai Sucontphunt, Z. Deng, U. Neumann
Computer-generated facial avatars have been increasingly used in a variety of virtual reality applications. Emulating the real-world face sculpting process, we present an interactive system to intuitively craft personalized 3D facial avatars by using 3D portrait editing and image example-based painting techniques. Starting from a default 3D face portrait, users can conveniently perform intuitive "pulling" operations on its 3D surface to sculpt the 3D face shape towards any individual. To automatically maintain the faceness of the 3D face being crafted, novel facial anthropometry constraints and a reduced face description space are incorporated into the crafting algorithms dynamically. Once the 3D face geometry is crafted, this system can automatically generate a face texture for the crafted model using an image example-based painting algorithm. Our user studies showed that with this system, users are able to craft a personalized 3D facial avatar efficiently on average within one minute.
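The "reduced face description space" used to maintain faceness is commonly realized as a PCA subspace learned from example faces: any edit component outside the span of the examples is discarded. A hedged sketch of that idea follows; the data, dimensions, and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def project_to_face_space(edited, mean_face, basis):
    """Clamp an edited face shape to a reduced face space: subtract the
    mean face, project onto an orthonormal PCA basis (rows = components),
    and reconstruct. The part of the edit outside the span of the example
    faces -- the part that would break 'faceness' -- is discarded."""
    coeffs = basis @ (edited - mean_face)
    return mean_face + basis.T @ coeffs

# Toy face space built from a few example "faces" (flattened vertex vectors).
examples = np.random.default_rng(0).normal(size=(8, 30))
mean_face = examples.mean(axis=0)
# Orthonormal basis of the top 3 principal components via SVD.
_, _, Vt = np.linalg.svd(examples - mean_face, full_matrices=False)
basis = Vt[:3]
edited = mean_face + np.random.default_rng(1).normal(size=30)  # a user's "pull"
constrained = project_to_face_space(edited, mean_face, basis)
```

The projection is idempotent, so it can be applied after every interactive "pulling" operation without drift.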
Citations: 5
Measurement Protocols for Medium-Field Distance Perception in Large-Screen Immersive Displays
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811007
Eric Klein, J. Swan, G. Schmidt, M. Livingston, O. Staadt
How do users of virtual environments perceive virtual space? Many experiments have explored this question, but most of these have used head-mounted immersive displays. This paper reports an experiment that studied large-screen immersive displays at medium-field distances of 2 to 15 meters. The experiment measured egocentric depth judgments in a CAVE, a tiled display wall, and a real-world outdoor field as a control condition. We carefully modeled the outdoor field to make the three environments as similar as possible. Measuring egocentric depth judgments in large-screen immersive displays requires adapting new measurement protocols; the experiment used timed imagined walking, verbal estimation, and triangulated blind walking. We found that depth judgments from timed imagined walking and verbal estimation were very similar in all three environments. However, triangulated blind walking was accurate only in the outdoor field; in the large-screen immersive displays it showed underestimation effects that were likely caused by insufficient physical space to perform the technique. These results suggest using timed imagined walking as a primary protocol for assessing depth perception in large-screen immersive displays. We also found that depth judgments in the CAVE were more accurate than in the tiled display wall, which suggests that the peripheral scenery offered by the CAVE is helpful when perceiving virtual space.
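The triangulated blind-walking protocol recovers a judged distance geometrically: the subject views the target, walks blindfolded along an oblique path, then turns to face the target, and the judged distance is where the facing ray crosses the original line of sight. A small sketch of that triangulation, with coordinate conventions assumed rather than taken from the paper:

```python
import math

def triangulated_distance(walk_end, pointing_angle):
    """Recover a judged egocentric distance from a triangulated
    blind-walking trial. The subject first views the target straight ahead
    along the +y axis from the origin, then walks blind to walk_end = (x, y)
    and turns to face the target; pointing_angle is the heading of that
    facing direction in radians (0 = +x axis). The judged target position
    is where the facing ray crosses the original line of sight (x = 0)."""
    x, y = walk_end
    dx, dy = math.cos(pointing_angle), math.sin(pointing_angle)
    if abs(dx) < 1e-9:
        raise ValueError("pointing ray is parallel to the line of sight")
    t = -x / dx              # ray parameter where the ray reaches x = 0
    return y + t * dy        # judged distance along the line of sight

# If the subject walks 2 m to the right of a target judged to be 5 m away,
# an accurate facing heading is atan2(5, -2); triangulation recovers 5 m.
d = triangulated_distance((2.0, 0.0), math.atan2(5.0, -2.0))
```

The protocol's sensitivity to physical space is visible here: a longer oblique walk widens the triangle's base and makes the intersection better conditioned, which is consistent with the underestimation the authors attribute to cramped lab space.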
Citations: 71
Multiple Behaviors Generation by 1 D.O.F. Mobile Robot
Pub Date : 2009-03-14 DOI: 10.1109/VR.2009.4811069
Teppei Toyoizumi, S. Yonekura, R. Tadakuma, Y. Kawaguchi, A. Kamimura
In this research, we developed a sphere-shaped mobile robot that can generate multiple behaviors by using only one motor. The robot can generate the translational motion and the rotational motion by controlling the motion of the motor. The motor itself acts as an eccentric weight during motions. To generate emergent behaviors, many protrusions are mounted on the surface of the spherical body. The emergent behaviors occur by an interaction between the external world and these protrusions when the sphere is vibrating, and the robot can move in a random walk manner.
Citations: 0
Automatic Creation of Massive Virtual Cities
Pub Date : 2009-03-01 DOI: 10.1109/VR.2009.4811023
Charalambos (Charis) Poullis, Suya You
This research effort focuses on the historically difficult problem of creating large-scale (city-size) scene models from sensor data, including rapid extraction and modeling of geometry models. The solution to this problem is sought in the development of a novel modeling system with a fully automatic technique for the extraction of polygonal 3D models from LiDAR (Light Detection And Ranging) data. The result is an accurate 3D model representation of the real world, as shown in Figure 1. We present and evaluate experimental results of our approach for the automatic reconstruction of large U.S. cities.
Citations: 18
Interactive Virtual Reality Simulation for Nanoparticle Manipulation and Nanoassembly using Optical Tweezers
Pub Date : 2008-06-22 DOI: 10.1109/VR.2009.4811040
Krishna C. Bhavaraju
Nanotechnology is one of the most promising technologies for future development. This paper proposes virtual reality (VR) as a tool to simulate nanoparticle manipulation with optical tweezers (OT) toward achieving nano-assembly, and to handle effectively such issues as the difficulty of viewing, perceiving, and controlling nano-scale objects. The simulation displays all the forces acting on the nanoparticle during manipulation. It is developed for particles in the Rayleigh regime and represents the interaction of the OT (a laser beam) with the nanoparticle: the laser beam aimed at the nanoparticle traps it by applying optical forces, and the trapped particle is then moved by moving the beam. The proposed VR-based simulation tool can easily be extended into an open system framework by connecting it to a real OT setup to control nanoparticle manipulation. In addition, a feedback system can be built to increase the precision of movement.
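For particles in the Rayleigh regime (radius much smaller than the wavelength), the trapping force is conventionally modeled as a gradient force proportional to the particle's polarizability and the local intensity gradient. The sketch below uses the textbook Rayleigh gradient-force expression for a Gaussian transverse beam profile; the paper does not state its force model, and all numbers here are illustrative.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rayleigh_gradient_force(r, a, m, n_m, I0, w):
    """Transverse optical gradient force (N) on a Rayleigh particle of
    radius a (m) at distance r (m) from the axis of a Gaussian beam with
    peak intensity I0 (W/m^2) and waist w (m). m = n_particle / n_medium.
    Standard Rayleigh approximation:
        F = (2*pi*n_m*a^3 / c) * (m^2 - 1)/(m^2 + 2) * dI/dr
    A negative value means the particle is pulled toward the beam axis."""
    dI_dr = I0 * (-4.0 * r / w**2) * math.exp(-2.0 * r**2 / w**2)
    clausius_mossotti = (m**2 - 1.0) / (m**2 + 2.0)
    return (2.0 * math.pi * n_m * a**3 / C) * clausius_mossotti * dI_dr

# Illustrative case: a 50 nm polystyrene bead (n = 1.59) in water (n = 1.33),
# displaced 0.2 um from the axis of a beam focused to a ~1 um waist.
F = rayleigh_gradient_force(r=0.2e-6, a=50e-9, m=1.59 / 1.33, n_m=1.33,
                            I0=1e10, w=1e-6)
```

Because the force is restoring (negative for positive displacement when m > 1), a simulation can render it as a spring-like arrow on the particle, which matches the paper's goal of visualizing all forces acting during manipulation.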
Citations: 2