
Proceedings of the 2nd ACM symposium on Spatial user interaction: Latest Publications

The significance of stereopsis and motion parallax in mobile head tracking environments
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661220
Paul Lubos, Dimitar Valkov
Despite 3D TVs and applications gaining popularity in recent years, 3D displays on mobile devices are rare. With low-cost head tracking solutions and the first such user interfaces available on smartphones, the question arises of how effective the 3D impression produced by motion parallax is, and whether viable depth perception can be achieved without binocular stereo cues. As motion parallax and stereopsis may be considered the most important depth cues, we developed an experiment comparing the user's depth perception under head tracking with and without stereopsis.
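To make the motion-parallax condition concrete: head-coupled rendering of this kind is commonly implemented by re-deriving an off-axis (generalized) perspective frustum from the tracked head position every frame. The sketch below illustrates that standard approach; it is not code from the paper, and the screen-corner coordinates and near/far planes are placeholder assumptions.

```python
import numpy as np

def off_axis_frustum(head, screen_ll, screen_lr, screen_ul, near=0.01, far=100.0):
    """Return (left, right, bottom, top, near, far) for a head-coupled frustum.

    head -- tracked eye position; screen_* -- lower-left, lower-right and
    upper-left screen corners, all in the same tracker coordinate system.
    """
    head = np.asarray(head, float)
    ll, lr, ul = (np.asarray(p, float) for p in (screen_ll, screen_lr, screen_ul))

    # Orthonormal screen basis: right, up, and normal pointing toward the viewer.
    vr = lr - ll; vr /= np.linalg.norm(vr)
    vu = ul - ll; vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)

    # Vectors from the eye to the screen corners.
    va, vb, vc = ll - head, lr - head, ul - head
    d = -np.dot(va, vn)                      # perpendicular eye-to-screen distance

    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top, near, far
```

The stereoscopic condition would run the same computation once per eye, with the two eye positions offset by half the interpupillary distance.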
Citations: 0
Safe-&-round: bringing redirected walking to small virtual reality laboratories
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661219
Paul Lubos, G. Bruder, Frank Steinicke
Walking is usually considered the most natural form of self-motion in a virtual environment (VE). However, the confined physical workspace of typical virtual reality (VR) labs often prevents natural exploration of larger VEs. Redirected walking has been introduced as a potential solution to this restriction, but the corresponding techniques often require strong manipulations when the workspace is very small and therefore lack a natural walking experience. In this poster we propose the Safe-&-Round user interface, which supports natural walking in a potentially infinite virtual scene while the user is confined to a considerably restricted physical workspace. This virtual locomotion technique relies on a safety volume, displayed as a semi-transparent half-capsule, inside which the user can walk without the manipulations caused by redirected walking.
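The abstract gives no implementation details of the safety volume; the following is a hedged sketch of one plausible containment test for a half-capsule (a vertical cylinder with a hemispherical cap). The radius, height, and coordinate conventions are assumptions, not the authors' values.

```python
import numpy as np

def inside_half_capsule(head_pos, floor_center, radius=1.5, height=2.2):
    """True if the tracked head lies inside a half-capsule standing on the floor.

    The volume is a vertical cylinder of the given radius topped by a
    hemispherical cap, with the given total height above the floor center.
    """
    dx, dy, dz = np.asarray(head_pos, float) - np.asarray(floor_center, float)
    horizontal = np.hypot(dx, dz)            # distance from the capsule axis
    if dy < 0.0 or dy > height:
        return False
    if dy <= height - radius:                # cylindrical part
        return horizontal <= radius
    cap = dy - (height - radius)             # hemispherical cap on top
    return horizontal**2 + cap**2 <= radius**2
```

A locomotion controller could, for instance, fade in the semi-transparent boundary as this test approaches failing and only then engage redirection.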
Citations: 5
Void shadows: multi-touch interaction with stereoscopic objects on the tabletop
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659779
A. Giesler, Dimitar Valkov, K. Hinrichs
In this paper we present the Void Shadows interaction, a novel stereoscopic 3D interaction paradigm in which each virtual object casts a shadow on a touch-enabled display surface. The user can conveniently interact with such a shadow, and her actions are transferred to the associated object. Since all interactive tasks are carried out on the zero-parallax plane, there are no accommodation-convergence conflicts or related 2D/3D interaction problems, while the user is still able to "directly" manipulate objects at different 3D positions without first having to position a cursor and select an object. In an initial user study we demonstrated the applicability of the metaphor for some common tasks and found that, compared to in-air 3D interaction techniques, users performed up to 28% more precisely in about the same amount of time.
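The core of such a metaphor is a mapping from each object to its shadow footprint on the zero-parallax plane, plus a reverse lookup from a 2D touch to the object it controls. The snippet below is an illustrative reconstruction under simplifying assumptions (a point light, disc-shaped shadows), not the authors' implementation.

```python
import numpy as np

def shadow_center(obj_pos, light_pos, plane_z=0.0):
    """Project an object's center onto the display plane along the light ray."""
    obj, light = np.asarray(obj_pos, float), np.asarray(light_pos, float)
    # Ray p = light + t * (obj - light); intersect with z = plane_z
    # (assumes the object is not at the light's depth).
    t = (plane_z - light[2]) / (obj[2] - light[2])
    return (light + t * (obj - light))[:2]

def pick_by_shadow(touch_xy, objects, light_pos, radius=0.05):
    """Return the object whose projected shadow disc contains the touch point."""
    touch = np.asarray(touch_xy, float)
    for obj in objects:
        if np.linalg.norm(shadow_center(obj["pos"], light_pos) - touch) <= radius:
            return obj
    return None

# Example (hypothetical data):
# objects = [{"id": "cube", "pos": (0.10, 0.20, 0.15)}]
# hit = pick_by_shadow((0.12, 0.21), objects, light_pos=(0.0, 0.0, 1.0))
```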
Citations: 16
Ethereal planes: a design framework for 2D information space in 3D mixed reality environments
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659769
Barrett Ens, Juan David Hincapié-Ramos, Pourang Irani
Information spaces are virtual workspaces that help us manage information by mapping it to the physical environment. This widely influential concept has been interpreted in a variety of forms, often in conjunction with mixed reality. We present Ethereal Planes, a design framework that ties together many existing variations of 2D information spaces. Ethereal Planes is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays. From an extensive literature review, we encapsulated the common attributes of existing novel designs in seven design dimensions. Mapping the reviewed designs to the framework dimensions reveals a set of common usage patterns. We discuss how the Ethereal Planes framework can be methodically applied to help inspire new designs. We provide a concrete example of the framework's utility during the design of the Personal Cockpit, a window management system for head-worn displays.
Citations: 82
Are 4 hands better than 2?: bimanual interaction for quadmanual user interfaces
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659782
Paul Lubos, G. Bruder, Frank Steinicke
The design of spatial user interaction for immersive virtual environments (IVEs) is an inherently difficult task. Missing haptic feedback and spatial misperception hinder efficient direct interaction with virtual objects. Moreover, interaction performance depends on a variety of ergonomic factors, such as the user's endurance, muscular strength, and fitness. However, the potential benefits of the direct and natural interaction offered by IVEs encourage research into more efficient interaction methods. We suggest a novel form of 3D interaction by exploiting the fact that, for many tasks, bimanual interaction shows benefits over one-handed interaction in a confined interaction space. In this paper we push this idea even further and introduce quadmanual user interfaces (QUIs) with two additional, virtual hands. These magic hands allow the user to keep their arms in a comfortable position while still interacting with multiple virtual interaction spaces. To analyze our approach we conducted a performance experiment inspired by a Fitts' Law selection task, investigating the feasibility of our approach for natural interaction with 3D objects in virtual space.
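For readers unfamiliar with the analysis behind such selection experiments, the Shannon formulation of Fitts' law is the usual yardstick. The helper below is our addition, and the example numbers are illustrative, not from the paper.

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits: ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for a single selection movement."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a 0.40 m reach to a 0.05 m target completed in 0.9 s gives
# ID = log2(0.40/0.05 + 1) = log2(9) ~= 3.17 bits and throughput ~= 3.5 bits/s.
```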
Citations: 8
Real-time and robust grasping detection
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661224
Chih-Fan Chen, Ryan P. Spicer, Rhys Yahata, M. Bolas, Evan A. Suma
Depth-based gesture cameras provide a promising and novel way to interface with computers. Nevertheless, this type of interaction remains challenging due to the complexity of finger interactions and large viewpoint variations. Existing middleware such as the Intel Perceptual Computing SDK (PCSDK) or SoftKinetic IISU can provide abundant hand tracking and gesture information. However, the data is too noisy (Fig. 1, left) for consistent and reliable use in our application. In this work, we present a filtering approach that combines several features from the PCSDK to achieve a more stable hand-openness estimate and to support grasping interactions in virtual environments. A support vector machine (SVM), a machine learning method, is used to achieve better accuracy in a single frame, and a Markov random field (MRF), a probabilistic graphical model, is used to stabilize and smooth the sequential output. Our experimental results verify the effectiveness and robustness of our method.
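The described two-stage pipeline (per-frame SVM classification, then MRF smoothing over the frame sequence) can be sketched as follows. This is a hedged reconstruction: the feature set, class labels, and switching penalty are assumptions, and the chain MRF is decoded here with simple Viterbi dynamic programming rather than whatever inference the authors used.

```python
import numpy as np
from sklearn.svm import SVC

# Trained offline on labeled frames; features might combine middleware openness,
# fingertip count, palm radius, etc. (assumed, not specified in the abstract).
svm = SVC(probability=True)
# svm.fit(train_features, train_labels)    # labels: 0 = open hand, 1 = grasp

def smooth_labels(frame_probs, switch_cost=2.0):
    """Viterbi decoding of a two-state chain MRF (0 = open, 1 = grasp).

    frame_probs -- (T, 2) per-frame class probabilities, e.g. svm.predict_proba(X)
    switch_cost -- pairwise penalty for changing state between adjacent frames
    """
    unary = -np.log(np.clip(np.asarray(frame_probs, float), 1e-6, 1.0))
    T = len(unary)
    cost = unary[0].copy()                   # best path cost ending in each state
    back = np.zeros((T, 2), dtype=int)       # best predecessor state per frame
    for t in range(1, T):
        new_cost = np.empty(2)
        for s in (0, 1):
            cand = [cost[p] + (switch_cost if p != s else 0.0) for p in (0, 1)]
            back[t, s] = int(np.argmin(cand))
            new_cost[s] = unary[t, s] + min(cand)
        cost = new_cost
    state = int(np.argmin(cost))             # backtrack the minimum-cost sequence
    labels = [state]
    for t in range(T - 1, 0, -1):
        state = int(back[t, state])
        labels.append(state)
    return labels[::-1]
```

For online use the same idea can be applied causally over a short sliding buffer of recent frames, trading a few frames of latency for stability.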
Citations: 0
Augmented reality paper clay making based on hand gesture recognition
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661209
P. Chiang, Wei-Yu Li
We propose a gesture-based 3D modeling system that allows the user to create and sculpt a 3D model with hand gestures. The goal of our system is to provide a more intuitive 3D user interface than traditional 2D ones such as a mouse or touch pad. Inspired by how people make paper clay, a series of hand gestures is designed for interacting with the 3D object, and the corresponding mesh-processing functions are developed. Thus, the user can create a desired virtual 3D object just as in paper clay making.
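The gesture-to-operation mapping can be pictured as a dispatch table from recognized gestures to mesh-processing functions. The sketch below is purely illustrative; the gesture names, radii, and deformation rules are our assumptions, not the authors' actual operations.

```python
import numpy as np

def pinch_pull(vertices, contact, radius=0.03, strength=0.01):
    """Pull vertices near the contact point outward from it (clay-like pulling)."""
    d = np.linalg.norm(vertices - contact, axis=1)
    mask = d < radius
    direction = vertices[mask] - contact
    direction /= np.linalg.norm(direction, axis=1, keepdims=True) + 1e-9
    vertices[mask] += strength * direction
    return vertices

def press_flatten(vertices, contact, radius=0.05, strength=0.5):
    """Push nearby vertices toward the contact height, like pressing clay."""
    d = np.linalg.norm(vertices - contact, axis=1)
    mask = d < radius
    vertices[mask, 1] += strength * (contact[1] - vertices[mask, 1])
    return vertices

GESTURE_OPS = {"pinch": pinch_pull, "press": press_flatten}   # hypothetical set

def apply_gesture(gesture, vertices, contact):
    """Dispatch a recognized gesture label to its mesh-processing function."""
    return GESTURE_OPS[gesture](np.asarray(vertices, float),
                                np.asarray(contact, float))
```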
Citations: 1
LeapLook: a free-hand gestural travel technique using the leap motion finger tracker
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661218
Robert Codd-Downey, W. Stuerzlinger
Contactless motion sensing devices enable a new form of input that does not encumber the user with wearable tracking equipment. We present a novel travel technique using the Leap Motion finger tracker, which adopts the 2DOF steering metaphor familiar from mouse-and-keyboard navigation in many 3D computer games.
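A 2DOF steering metaphor of this kind typically maps the tracked hand's offset from a neutral rest position to yaw and pitch rates, much like mouse-look. The sketch below abstracts the tracker to "a palm position per frame"; the neutral point, dead zone, and gain are illustrative assumptions rather than the paper's parameters.

```python
import math

NEUTRAL = (0.0, 0.20)        # comfortable hover position above the sensor (x, y in m)
DEAD_ZONE = 0.02             # ignore jitter within 2 cm of the neutral point
GAIN = 3.0                   # radians per second per metre of offset

def steering_rates(palm_x, palm_y):
    """Map horizontal/vertical hand offset to yaw and pitch rates (rad/s)."""
    dx, dy = palm_x - NEUTRAL[0], palm_y - NEUTRAL[1]
    yaw_rate   = GAIN * dx if abs(dx) > DEAD_ZONE else 0.0
    pitch_rate = GAIN * dy if abs(dy) > DEAD_ZONE else 0.0
    return yaw_rate, pitch_rate

def integrate(yaw, pitch, palm_x, palm_y, dt):
    """Advance the camera orientation by one frame, clamping pitch."""
    yaw_rate, pitch_rate = steering_rates(palm_x, palm_y)
    yaw += yaw_rate * dt
    pitch = max(-math.pi / 2, min(math.pi / 2, pitch + pitch_rate * dt))
    return yaw, pitch
```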
Citations: 10
AnnoScape: remote collaborative review using live video overlay in shared 3D virtual workspace
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659776
Austin S. Lee, H. Chigira, S. Tang, Kojo Acquah, H. Ishii
We introduce AnnoScape, a remote collaboration system that allows users to overlay live video of the physical desktop on a shared 3D virtual workspace to support individual and collaborative review of 2D and 3D content using hand gestures and real ink. The AnnoScape system enables distributed users to visually navigate the shared 3D virtual workspace individually or jointly by moving tangible handles, to simultaneously snap into a shared viewpoint, and to generate a live video overlay of freehand annotations from the desktop surface onto the system's virtual viewports, which can be placed spatially in the 3D data space. Finally, we present results of our preliminary user study and discuss design issues and AnnoScape's potential to facilitate effective communication during remote 3D data reviews.
Citations: 10
Hidden UI: projection-based augmented reality for map navigation on multi-touch tabletop
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661228
Seungjae Oh, Heeseung Kwon, H. So
We present an interactive system that integrates a multi-touch tabletop with projection-based Augmented Reality (AR). The integrated system supports flexible presentation of multiple UI components, which makes it suitable for multi-touch tabletop environments that display complex information at different layers.
Citations: 0