Proceedings of the 2nd ACM symposium on Spatial user interaction — latest publications
Designing the user in user interfaces
Pub Date: 2014-10-05 DOI: 10.1145/2659766.2642919
M. Bolas
In the good old days, the human was here, the computer there, and a good living was to be made by designing ways to interface between the two. Now we find ourselves unthinkingly pinching to zoom in on a picture in a paper magazine. User interfaces are changing instinctual human behavior and instinctual human behavior is changing user interfaces. We point or look left in the "virtual" world just as we point or look left in the physical. It is clear that nothing is clear anymore: the need for "interface" vanishes when the boundaries between the physical and the virtual disappear. We are at a watershed moment when to experience being human means to experience being machine. When there is not a user interface - it is just what you do. When instinct supplants mice and menus and the interface insinuates itself into the human psyche. We are redefining and creating what it means to be human in this new physical/virtual integrated reality - we are not just designing user interfaces, we are designing users.
Citations: 16
Exploring gestural interaction in smart spaces using head mounted devices with ego-centric sensing
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2659781
Barry Kollee, Sven G. Kratz, Anthony Dunnigan
It is now possible to develop head-mounted devices (HMDs) that allow for ego-centric sensing of mid-air gestural input. Therefore, we explore the use of HMD-based gestural input techniques in smart space environments. We developed a usage scenario to evaluate HMD-based gestural interactions and conducted a user study to elicit qualitative feedback on several HMD-based gestural input techniques. Our results show that for the proposed scenario, mid-air hand gestures are preferred to head gestures for input and rated more favorably compared to non-gestural input techniques available on existing HMDs. Informed by these study results, we developed a prototype HMD system that supports gestural interactions as proposed in our scenario. We conducted a second user study to quantitatively evaluate our prototype comparing several gestural and non-gestural input techniques. The results of this study show no clear advantage or disadvantage of gestural inputs vs. non-gestural input techniques on HMDs. We did find that voice control as (sole) input modality performed worst compared to the other input techniques we evaluated. Lastly, we present two further applications implemented with our system, demonstrating 3D scene viewing and ambient light control. We conclude by briefly discussing the implications of ego-centric vs. exo-centric tracking for interaction in smart spaces.
Citations: 26
Making VR work: building a real-world immersive modeling application in the virtual world
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2659780
M. Mine, A. Yoganandan, D. Coffey
Building a real-world immersive 3D modeling application is hard. In spite of the many supposed advantages of working in the virtual world, users quickly tire of waving their arms about and the resulting models remain simplistic at best. The dream of creation at the speed of thought has largely remained unfulfilled due to numerous factors such as the lack of suitable menu and system controls, inability to perform precise manipulations, lack of numeric input, challenges with ergonomics, and difficulties with maintaining user focus and preserving immersion. The focus of our research is on the building of virtual world applications that can go beyond the demo and can be used to do real-world work. The goal is to develop interaction techniques that support the richness and complexity required to build complex 3D models, yet minimize expenditure of user energy and maximize user comfort. We present an approach that combines the natural and intuitive power of VR interaction, the precision and control of 2D touch surfaces, and the richness of a commercial modeling package. We also discuss the benefits of collocating 2D touch with 3D bimanual spatial input, the challenges in designing a custom controller targeted at achieving the same, and the new avenues that this collocation creates.
Citations: 40
Measurements of operating time in first and third person views using video see-through HMD
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661204
T. Koike
We measured the operation times of two tasks using a video see-through head-mounted display (HMD) in first and third person views.
Citations: 1
Real-time sign language recognition using RGBD stream: spatial-temporal feature exploration
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661214
Fuyang Huang, Zelong Sun, Q. Xu, F. Sze, Tang Wai Lan, Xiaogang Wang
We propose a novel spatial-temporal feature set for sign language recognition, wherein we construct explicit spatial and temporal features that capture both hand movement and hand shape. Experimental results show that the proposed solution outperforms an existing one in terms of accuracy.
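The abstract does not give the exact feature definitions. As a rough illustration of the idea only, per-frame spatial features (hand position relative to the torso) can be paired with temporal features (frame-to-frame hand displacement) computed from an RGBD joint track; the function name, feature layout, and joint choice below are illustrative assumptions, not the authors' construction:

```python
import numpy as np

def spatial_temporal_features(hand_xyz, torso_xyz):
    """Build a simple spatial-temporal feature matrix from RGBD joint tracks.

    hand_xyz, torso_xyz: (T, 3) arrays of per-frame 3D joint positions.
    Returns a (T-1, 6) array pairing spatial features (hand position
    relative to the torso) with temporal features (frame-to-frame
    hand displacement).
    """
    hand = np.asarray(hand_xyz, dtype=float)
    torso = np.asarray(torso_xyz, dtype=float)
    spatial = hand - torso              # where the hand is, body-centered
    temporal = np.diff(hand, axis=0)    # how the hand moves between frames
    return np.hstack([spatial[1:], temporal])
```

A classifier (e.g. an HMM or SVM over these per-frame vectors) would then consume the feature matrix; hand-shape features would be appended analogously.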
Citations: 1
RUIS: a toolkit for developing virtual reality applications with spatial interaction
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2659774
Tuukka M. Takala
We introduce the Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed at students and hobbyists, which we have used in an annually organized VR course for the past four years. The RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating the variety of spatial user interfaces that it can produce.
Citations: 42
HoloLeap: towards efficient 3D object manipulation on light field displays
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661223
V. K. Adhikarla, Paweł W. Woźniak, Robert J. Teather
We present HoloLeap, which uses a Leap Motion controller for 3D model manipulation on a light field display (LFD). Like autostereo displays, LFDs support glasses-free 3D viewing. Unlike autostereo displays, LFDs automatically accommodate multiple viewpoints without the need for additional tracking equipment. We describe a gesture-based object manipulation technique that enables manipulation of 3D objects with seven degrees of freedom (7DOF) by leveraging natural and familiar gestures. We provide an overview of research questions aimed at optimizing gestural input on light field displays.
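The abstract does not specify how the 7DOF update (3 translation, 3 rotation, 1 uniform scale) is derived from tracked hands. As a hedged sketch of one common two-hand mapping — not the authors' method — the midpoint motion gives translation, the change in the hand-to-hand vector gives an axis-angle rotation, and the stretch/pinch of the hand separation gives scale:

```python
import numpy as np

def two_hand_delta(l0, r0, l1, r1):
    """Derive a 7DOF update (translation, axis-angle rotation, uniform
    scale) from left/right hand positions at two consecutive frames.

    Returns (translation vector, rotation axis, rotation angle, scale).
    """
    l0, r0, l1, r1 = (np.asarray(p, dtype=float) for p in (l0, r0, l1, r1))
    translation = (l1 + r1) / 2 - (l0 + r0) / 2      # midpoint motion
    v0, v1 = r0 - l0, r1 - l1                        # hand-to-hand vectors
    scale = np.linalg.norm(v1) / np.linalg.norm(v0)  # stretch/pinch ratio
    u0 = v0 / np.linalg.norm(v0)
    u1 = v1 / np.linalg.norm(v1)
    axis = np.cross(u0, u1)                          # rotation axis (unnormalized)
    angle = np.arctan2(np.linalg.norm(axis), np.dot(u0, u1))
    return translation, axis, angle, scale
```

Applied once per tracking frame, this accumulates into the full 7DOF transform of the manipulated model.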
Citations: 5
Augmenting views on large format displays with tablets
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661227
Phil Lindner, Adolfo Rodriguez, T. Uram, M. Papka
Large format displays are commonplace for viewing large scientific datasets. These displays often find their way into collaborative spaces, allowing for multiple individuals to be collocated with the display, though multi-modal interaction with the displayed content remains a challenge. We have begun development of a tablet-based interaction mode for use with large format displays to augment these workspaces.
Citations: 0
Depth cues and mouse-based 3D target selection
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661221
Robert J. Teather, W. Stuerzlinger
We investigated mouse-based 3D selection using one-eyed cursors, evaluating stereo and head-tracking. Stereo cursors significantly reduced performance for targets at different depths, but the one-eyed cursor yielded some discomfort.
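The abstract names the one-eyed cursor without detailing it; one common reading is a pick ray cast from a single eye position through the mouse point on the screen plane, tested against spherical targets. The sketch below follows that assumption (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def one_eyed_cursor_pick(eye, mouse_on_screen, targets, radius):
    """Pick the nearest target sphere hit by a ray cast from a single eye
    point through the mouse position on the screen plane.

    eye, mouse_on_screen: 3D points; targets: (N, 3) sphere centers.
    Returns the index of the closest hit target, or None on a miss.
    """
    eye = np.asarray(eye, dtype=float)
    d = np.asarray(mouse_on_screen, dtype=float) - eye
    d /= np.linalg.norm(d)                     # unit ray direction
    best, best_t = None, np.inf
    for i, c in enumerate(np.asarray(targets, dtype=float)):
        oc = c - eye
        t = np.dot(oc, d)                      # closest approach along ray
        if t < 0:                              # target behind the eye
            continue
        miss2 = np.dot(oc, oc) - t * t         # squared distance ray-to-center
        if miss2 <= radius * radius and t < best_t:
            best, best_t = i, t
    return best
```

Under stereo rendering the same ray would originate from one chosen eye, which is what makes the cursor "one-eyed".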
Citations: 4
Projection augmented physical visualizations
Pub Date: 2014-10-04 DOI: 10.1145/2659766.2661210
Simon Stusak, M. Teufel
Physical visualizations are an emerging area of research and appear in increasingly diverse forms. While they provide an engaging way of exploring data, they are often limited by a fixed representation and lack interactivity. In this work we discuss our early approaches and experiences in combining physical visualizations with spatial augmented reality and present an initial prototype.
Citations: 2