
2012 14th Symposium on Virtual and Augmented Reality: Latest Publications

Dynamic Cloth Simulation: A Comparative Study of Explicit and Implicit Numerical Integration
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.11
Laise Lima De Carvalho, C. Vidal, J. B. C. Neto, Suzana Matos França de Oliveira
Physically based cloth animation has gained much attention from researchers in the last two decades, due to the challenges of realism placed by the film and game industries, as well as by the applications of virtual reality and e-commerce. Despite the overwhelming achievements in this area, a deeper understanding of the numerical techniques involved in the simulations is still in order. This paper analyzes the behavior of some useful integration techniques, and tests them in three typical simulations of cloth animation.
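The comparison hinges on how each integrator copes with stiff spring forces. As a minimal illustration (not the paper's implementation), the Python sketch below steps a single damped spring with explicit and with backward (implicit) Euler at a 60 Hz time step; the constants are illustrative. The explicit solution diverges while the implicit one settles, which is the stability trade-off under study.

```python
import numpy as np

# One damped spring stepped with explicit and with implicit (backward) Euler.
# The constants are illustrative, not taken from the paper: the point is that
# at a 60 Hz step the explicit solution diverges while the implicit one decays.
k, c, m, dt = 1000.0, 1.0, 1.0, 1.0 / 60.0   # stiffness, damping, mass, time step

def explicit_step(x, v):
    a = (-k * x - c * v) / m                 # force evaluated at the current state
    return x + dt * v, v + dt * a

def implicit_step(x, v):
    # Backward Euler evaluates the force at the unknown new state; for a linear
    # spring this reduces to a 2x2 linear solve per particle.
    A = np.array([[1.0, -dt],
                  [dt * k / m, 1.0 + dt * c / m]])
    x1, v1 = np.linalg.solve(A, np.array([x, v]))
    return x1, v1

xe, ve = 1.0, 0.0                            # both integrators start displaced
xi, vi = 1.0, 0.0
for _ in range(200):
    xe, ve = explicit_step(xe, ve)
    xi, vi = implicit_step(xi, vi)
print(f"explicit Euler after 200 steps: x = {xe:.3e}")   # has blown up
print(f"implicit Euler after 200 steps: x = {xi:.3e}")   # has settled near rest
```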
Citations: 2
TrueSight A Pedestrian Navigation System Based in Automatic Landmark Detection and Extraction on Android Smartphone
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.14
Alessandro Luiz Stamatto Ferreira, S. R. D. Santos, Leonardo Cunha de Miranda
From time to time someone gets lost and asks himself, "How do I get there?" With the advent of GPS this question can be answered. However, difficulties such as lack of precision, possibly inaccurate maps, network dependency, and cost lead to the pursuit of an alternative solution. To locate himself, the person can use a different method: using a smartphone camera, his position is recognized visually based on environment references, and an arrow pointing in the right direction then appears on a map in the display. This method was implemented in the Android application framework, using OpenCV and its implementation of the SURF algorithm. The final application is named TrueSight, and we study its viability and limitations. The authors conclude that a vision-based navigation system is viable, but database improvements and exhibition could make it better.
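As a rough illustration of the landmark-recognition step, the sketch below matches local features between a stored reference image and a camera frame with OpenCV. The paper uses SURF; because SURF ships only in OpenCV's non-free contrib build, ORB is used here as a freely available stand-in, and the file names and acceptance threshold are placeholders.

```python
import cv2

# Feature matching between a stored landmark image and a camera frame. The
# paper uses SURF via OpenCV; SURF ships only in the non-free contrib build,
# so ORB is used here as a freely available stand-in. File names and the
# acceptance threshold are placeholders.
reference = cv2.imread("landmark_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)    # Hamming distance for binary descriptors
matches = matcher.knnMatch(des_ref, des_frm, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

print(f"{len(good)} good matches")
if len(good) > 30:                           # illustrative decision threshold
    print("landmark recognized: the user is near this reference point")
```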
Citations: 2
Dance2Rehab3D: A 3D Virtual Rehabilitation Game
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.30
Alessandro Diogo Brückheimer, M. Hounsell, A. V. Soares
Keeping patients in long-term therapy seems to be as beneficial as the therapy itself. The use of computers to achieve engagement and motivation has been sought as a medium that provides not only entertainment but real therapy benefits. The use of some interaction devices (such as the mouse), however, is a limiting factor for some patients with motor disabilities. Existing camera-based games do not reason about the whole spectrum of movements required by therapy. The recent development and popularization of depth cameras have made it possible to develop interfaces that can explore users' 3D movements with no device to hold. This paper presents a game-like virtual environment in which controllable situations are generated and users' limitations are considered in order to foster movements through an interesting and relaxed set of activities. The development has shown that close collaboration between physiotherapists and computer scientists is mandatory in order to achieve a useful application.
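As a loose sketch of how a rehabilitation game can respect a patient's limitations, the snippet below places a reach target inside a calibrated range of motion and counts a repetition when the tracked hand enters it. The joint positions, calibration value, and thresholds are synthetic placeholders, not data or logic from Dance2Rehab3D.

```python
import numpy as np

# Targets are placed inside a reach envelope measured for the patient, and a
# repetition counts when the tracked hand enters the target. Joint positions,
# the calibration value, and thresholds are synthetic placeholders; in a real
# system they would come from the depth-camera SDK and a calibration session.
calibrated_reach = 0.45                       # metres, measured for this patient
shoulder = np.array([0.0, 1.4, 2.0])          # tracked shoulder position (fake)

def place_target(fraction=0.8):
    """Put the target at a comfortable fraction of the patient's reach."""
    direction = np.array([0.6, 0.5, -0.6])
    direction /= np.linalg.norm(direction)
    return shoulder + fraction * calibrated_reach * direction

def target_reached(hand, target, tolerance=0.07):
    return float(np.linalg.norm(hand - target)) < tolerance

target = place_target()
hand = shoulder + np.array([0.20, 0.17, -0.20])   # tracked hand position (fake)
print("repetition counted" if target_reached(hand, target) else "keep going")
```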
Citations: 11
Virtual Table -- Teleporter: Image Processing and Rendering for Horizontal Stereoscopic Display
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.31
B. Madeira, L. Velho
We describe a new architecture, composed of software and hardware, for displaying stereoscopic images over a horizontal surface. It works as a "Virtual Table and Teleporter", in the sense that virtual objects depicted over a table have the appearance of real objects. This system can be used for visualization and interaction. We propose two basic configurations: the Virtual Table, consisting of a single display surface, and the Virtual Teleporter, consisting of a pair of tables for image capture and display. The Virtual Table displays either 3D computer-generated images or previously captured stereoscopic video and can be used for interactive applications. The Virtual Teleporter captures and transmits stereoscopic video from one table to the other and can be used for telepresence applications. In both configurations the images are properly deformed and displayed for horizontal 3D stereo. In the Virtual Teleporter, two cameras are pointed at the first table, capturing a stereoscopic image pair. These images are shown on the second table, which is in fact a stereoscopic display positioned horizontally. Many applications can benefit from this technology, such as virtual reality, games, teleconferencing, and distance learning. We present some interactive applications that we developed using this architecture.
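The "properly deformed" step can be pictured as a perspective warp: the rendered eye view is remapped so that it looks correct from the viewer's oblique position above the horizontal screen. The OpenCV sketch below shows such a homography warp with placeholder corner coordinates and file names; it is illustrative and not the authors' calibration procedure.

```python
import cv2
import numpy as np

# The rendered eye view is warped with a homography so that it looks correct
# from the viewer's oblique position above the table surface. Corner
# coordinates and file names are illustrative placeholders, not the authors'
# calibration; a real setup would derive them from the viewer/display geometry.
rendered = cv2.imread("rendered_left_eye.png")
h, w = rendered.shape[:2]

src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[0.15 * w, 0], [0.85 * w, 0], [w, h], [0, h]])  # where corners land

H = cv2.getPerspectiveTransform(src, dst)
table_image = cv2.warpPerspective(rendered, H, (w, h))
cv2.imwrite("table_ready_left_eye.png", table_image)   # repeat for the right eye
```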
Citations: 4
A Case Study on the Implementation of the 3C Collaboration Model in Virtual Environments
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.28
Daniel Medeiros, E. R. Silva, Peter Dam, Rodrigo Pinheiro, Thiago Motta, Manuel E. Loaiza, A. Raposo
Throughout the years, many studies have explored the potential of Virtual Reality (VR) technologies to support collaborative work. However, few studies have looked into CSCW (Computer Supported Cooperative Work) collaboration models that could help VR systems improve their support for collaborative tasks. This paper analyzes the applicability of the 3C collaboration model as a methodology to model and define collaborative tools in the development of a collaborative virtual reality application. A case study is presented to illustrate the selection and evaluation of different tools that aim to support communication, cooperation, and coordination among users who interact in a virtual environment. The main objective of this research is to show that the criteria defined by the 3C model can be mapped as a parameter for the classification of interactive tools used in the development of collaborative virtual environments.
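A small sketch of what "mapping the 3C criteria as a classification parameter" could look like in code: each tool is tagged with the 3C dimensions it supports, and tools can then be selected by dimension. The tool names are illustrative examples, not the set evaluated in the paper's case study.

```python
from enum import Enum

# Each interactive tool is tagged with the 3C dimensions it supports, so the
# dimension works as a classification parameter. Tool names are illustrative
# examples, not the set evaluated in the paper's case study.
class C3(Enum):
    COMMUNICATION = "communication"
    COOPERATION = "cooperation"
    COORDINATION = "coordination"

tool_classification = {
    "voice chat": {C3.COMMUNICATION},
    "3D pointing / annotation": {C3.COMMUNICATION, C3.COORDINATION},
    "shared object manipulation": {C3.COOPERATION},
    "floor control (who may act)": {C3.COORDINATION},
}

def tools_supporting(dimension: C3):
    """List the tools classified under a given 3C dimension."""
    return [tool for tool, dims in tool_classification.items() if dimension in dims]

print(tools_supporting(C3.COORDINATION))
```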
Citations: 12
AR-based Video-Mediated Communication: A Social Presence Enhancing Experience
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.4
I. Almeida, Marina Atsumi Oikawa, Jordi Polo Carres, Jun Miyazaki, H. Kato, M. Billinghurst
Video-mediated communication systems attempt to provide users with a channel that could bring out the "feeling" of face-to-face communication. Among the many qualities these systems aim for, a high level of social presence is unquestionably a desirable one; however, little effort has been made to improve the user's perception of "presence". We propose an AR approach to enhance social presence in video-mediated systems by allowing one user to be present in the other user's video image. We conducted a preliminary pilot study with 10 participants grouped in 5 pairs to evaluate our system and compare it with a traditional video-chat setup. Results indicated that our system has a higher degree of social presence compared to traditional video-chat systems. This conclusion was supported by the positive feedback from the subjects.
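The core idea of making one user "present" in the other's video can be sketched as mask-based compositing: the remote user is segmented from their own feed and blended into the local frame. The snippet below uses synthetic frames and a rectangular stand-in mask; a real system would obtain these from the video streams and a segmentation step, and this is not the authors' implementation.

```python
import numpy as np

# The remote user is segmented from their own feed and blended into the local
# frame so both users appear in one image. Frames and the silhouette mask are
# synthetic placeholders; a real system would take them from the two video
# streams and a segmentation step. This is not the authors' implementation.
h, w = 480, 640
local_frame = np.full((h, w, 3), 80, dtype=np.uint8)     # stand-in local video frame
remote_frame = np.full((h, w, 3), 200, dtype=np.uint8)   # stand-in remote video frame

mask = np.zeros((h, w), dtype=np.float32)
mask[100:400, 200:440] = 1.0                              # stand-in person silhouette

alpha = mask[..., None]                                   # broadcast over color channels
composite = (alpha * remote_frame + (1.0 - alpha) * local_frame).astype(np.uint8)
print(composite.shape, composite[250, 300], composite[10, 10])
```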
Citations: 19
A Peer-to-Peer Multicast Architecture for Supporting Collaborative Virtual Environments (CVEs) in Medicine
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.7
P. V. F. Paiva, L. Machado, J. Oliveira
Collaborative Virtual Environments (CVEs) can improve the way remote users interact with one another while learning or training skills on a given task. One CVE application is the simulation of medical procedures in which a group of remote users can train and interact simultaneously. It is important that networking issues and the performance evaluation of CVEs allow us to understand how such systems can work on the Internet, as well as the requirements for multisensory and real-time data. Thus, this paper presents implementation issues of a peer-to-peer multicast network architecture in the collaborative module of the CyberMed VR framework. The multicast protocol is known to provide better scalability and to decrease bandwidth use in CVEs, allowing better Quality of Experience (QoE). Finally, it presents the results of a performance evaluation experiment.
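For readers unfamiliar with the transport model, the sketch below shows the basic IP multicast pattern in Python: peers join one group address, so a state update is sent once and delivered to every participant instead of once per peer. The group address, port, and message format are illustrative and not taken from CyberMed.

```python
import socket
import struct

# A generic IP multicast sketch of the transport idea: every peer joins one
# multicast group, so a state update is sent once and reaches all participants
# instead of being re-sent per peer. Group address, port, and payload format
# are illustrative placeholders, not values from the CyberMed framework.
GROUP, PORT = "239.1.1.1", 5007

def make_receiver() -> socket.socket:
    """Create a socket that receives every datagram sent to the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("=4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def send_update(payload: bytes) -> None:
    """Send one datagram to the whole group (e.g. a tracked tool position)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(payload, (GROUP, PORT))
    sock.close()

if __name__ == "__main__":
    send_update(b"tool_position 0.12 0.30 0.05")
```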
Citations: 13
Integration Framework of Augmented Reality and Tangible Interfaces for Enhancing the User Interaction
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.13
Fábio Rodrigues, F. Sato, L. C. Botega, Allan Oliveira
The integration of post-WIMP computer interfaces arises as an alternative for addressing the individual limitations of each modality, considering both the interaction components and the feedback given to users. Tangible interfaces can present restrictions related to the physical space of tabletop architectures, which limits the manipulation of objects and degrades the interactive process. Hence, this paper proposes the integration of mobile Augmented Reality techniques with a tangible tabletop architecture for blending real and virtual components on its surface, aiming to make the interactive process richer, seamless, and more complete.
Citations: 1
Real Time Ray Tracing for Augmented Reality
Pub Date : 2012-05-28 DOI: 10.1109/SVR.2012.8
A. Santos, Diego Lemos, Jorge Eduardo Falcao Lindoso, V. Teichrieb
This paper introduces a novel graphics rendering pipeline for augmented reality based on a real-time ray tracing paradigm. Ray tracing techniques process pixels independently from each other, allowing easy integration with image-based tracking techniques, contrary to traditional projection-based rasterization graphics systems, e.g., OpenGL. Therefore, by associating our highly optimized ray tracer with an augmented reality framework, the proposed pipeline is capable of providing high-quality rendering with real-time interaction between virtual and real objects, such as occlusions, soft shadows, custom shaders, reflections, and self-reflections, some of which are available only in our rendering pipeline. As proof of concept, we present a case study with the ARToolKitPlus library and the Microsoft Kinect hardware, both integrated into our pipeline. We also show the performance and visual results of the novel pipeline in high definition on modern graphics cards, presenting occlusion and recursive reflection effects between virtual and real objects without the latter needing to be modeled beforehand when using Kinect. Furthermore, an adaptive soft-shadow sampling algorithm for ray tracing is presented, generating high-quality shadows in real time for most scenes.
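The per-pixel effects listed above all come down to tracing primary and secondary rays. As a minimal CPU-side illustration (not the authors' optimized GPU pipeline), the sketch below intersects a primary ray with a virtual sphere and then casts a shadow ray toward a light source; real-scene occlusion in the paper's setting would additionally consult the Kinect depth data.

```python
import numpy as np

# A minimal CPU ray-tracing step: one primary ray hits a virtual sphere and a
# secondary (shadow) ray is cast toward the light. This is an illustrative
# sketch of the per-pixel work, not the authors' optimized GPU pipeline.
def hit_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along a normalized ray, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0.0 else None

sphere_center, sphere_radius = np.array([0.0, 0.0, -3.0]), 1.0
light_pos = np.array([2.0, 2.0, 0.0])

eye = np.zeros(3)
ray_dir = np.array([0.0, 0.0, -1.0])               # primary ray through one pixel
t = hit_sphere(eye, ray_dir, sphere_center, sphere_radius)
if t is not None:
    hit = eye + t * ray_dir                        # point on the virtual object
    to_light = light_pos - hit
    shadow_dir = to_light / np.linalg.norm(to_light)
    # Offset slightly to avoid re-hitting the surface we just left.
    blocked = hit_sphere(hit + 1e-4 * shadow_dir, shadow_dir,
                         sphere_center, sphere_radius)
    print("hit at", hit, "- in shadow:", blocked is not None)
```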
Citations: 26
FleXLIBRAS: Description and Animation of Signs in Brazilian Sign Language
Pub Date : 2012-05-01 DOI: 10.1109/SVR.2012.25
D. A. N. S. Silva, T. Araújo, L. Dantas, Yúrika Sato Nóbrega, H. R. G. Lima, Guido Lemos de Souza Filho
Deaf people communicate naturally through gestural and visual languages called sign languages. These languages are natural languages, composed of lexical items called signs, and have their own vocabulary and grammar. In this paper, we propose the definition of a formal, expressive, and consistent language to describe signs in Brazilian Sign Language (LIBRAS). This language allows the definition of all parameters of a sign and, consequently, the generation of an animation for that sign. In addition, the proposed language is flexible in the sense that new parameters (or phonemes) can be defined "on the fly". In order to provide a case study for the proposed language, a system for the collaborative construction of a LIBRAS vocabulary based on 3D humanoid avatars was also developed. Some tests with Brazilian deaf users were also performed to evaluate the proposal.
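Although the paper's concrete syntax is not reproduced here, the kind of parametric description it argues for can be sketched as a small data structure: a sign is defined by its phonological parameters, and an animation is generated from that description. The field names and example values below are hypothetical illustrations, not FleXLIBRAS syntax.

```python
from dataclasses import dataclass, field
from typing import List

# A sign is described by its phonological parameters and an animation is then
# generated from that description. Field names and example values here are
# hypothetical illustrations, not the FleXLIBRAS syntax itself.
@dataclass
class Movement:
    path: str                 # e.g. "straight", "circular"
    direction: str            # e.g. "forward", "down"
    repetitions: int = 1

@dataclass
class SignDescription:
    gloss: str
    handshape: str
    location: str
    palm_orientation: str
    movements: List[Movement] = field(default_factory=list)
    facial_expression: str = "neutral"

example_sign = SignDescription(
    gloss="CASA",                         # example gloss only
    handshape="flat_open",
    location="neutral_space",
    palm_orientation="palms_facing_each_other",
    movements=[Movement(path="straight", direction="down")],
)
print(example_sign)
```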
Citations: 4