In the good old days, the human was here, the computer there, and a good living was to be made by designing ways to interface between the two. Now we find ourselves unthinkingly pinching to zoom in on a picture in a paper magazine. User interfaces are changing instinctual human behavior and instinctual human behavior is changing user interfaces. We point or look left in the "virtual" world just as we point or look left in the physical. It is clear that nothing is clear anymore: the need for "interface" vanishes when the boundaries between the physical and the virtual disappear. We are at a watershed moment when to experience being human means to experience being machine. When there is not a user interface - it is just what you do. When instinct supplants mice and menus and the interface insinuates itself into the human psyche. We are redefining and creating what it means to be human in this new physical/virtual integrated reality - we are not just designing user interfaces, we are designing users.
{"title":"Designing the user in user interfaces","authors":"M. Bolas","doi":"10.1145/2659766.2642919","DOIUrl":"https://doi.org/10.1145/2659766.2642919","url":null,"abstract":"In the good old days, the human was here, the computer there, and a good living was to be made by designing ways to interface between the two. Now we find ourselves unthinkingly pinching to zoom in on a picture in a paper magazine. User interfaces are changing instinctual human behavior and instinctual human behavior is changing user interfaces. We point or look left in the \"virtual\" world just as we point or look left in the physical. It is clear that nothing is clear anymore: the need for \"interface\" vanishes when the boundaries between the physical and the virtual disappear. We are at a watershed moment when to experience being human means to experience being machine. When there is not a user interface - it is just what you do. When instinct supplants mice and menus and the interface insinuates itself into the human psyche. We are redefining and creating what it means to be human in this new physical/virtual integrated reality - we are not just designing user interfaces, we are designing users.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126114710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is now possible to develop head-mounted devices (HMDs) that allow for ego-centric sensing of mid-air gestural input. We therefore explore the use of HMD-based gestural input techniques in smart space environments. We developed a usage scenario to evaluate HMD-based gestural interactions and conducted a user study to elicit qualitative feedback on several HMD-based gestural input techniques. Our results show that, for the proposed scenario, mid-air hand gestures are preferred to head gestures for input and are rated more favorably than the non-gestural input techniques available on existing HMDs. Informed by these results, we developed a prototype HMD system that supports the gestural interactions proposed in our scenario. We conducted a second user study to quantitatively evaluate the prototype, comparing several gestural and non-gestural input techniques. The results of this study show no clear advantage or disadvantage of gestural versus non-gestural input techniques on HMDs. We did find that voice control as the sole input modality performed worst of the input techniques we evaluated. Lastly, we present two further applications implemented with our system, demonstrating 3D scene viewing and ambient light control. We conclude by briefly discussing the implications of ego-centric vs. exo-centric tracking for interaction in smart spaces.
{"title":"Exploring gestural interaction in smart spaces using head mounted devices with ego-centric sensing","authors":"Barry Kollee, Sven G. Kratz, Anthony Dunnigan","doi":"10.1145/2659766.2659781","DOIUrl":"https://doi.org/10.1145/2659766.2659781","url":null,"abstract":"It is now possible to develop head-mounted devices (HMDs) that allow for ego-centric sensing of mid-air gestural input. Therefore, we explore the use of HMD-based gestural input techniques in smart space environments. We developed a usage scenario to evaluate HMD-based gestural interactions and conducted a user study to elicit qualitative feedback on several HMD-based gestural input techniques. Our results show that for the proposed scenario, mid-air hand gestures are preferred to head gestures for input and rated more favorably compared to non-gestural input techniques available on existing HMDs. Informed by these study results, we developed a prototype HMD system that supports gestural interactions as proposed in our scenario. We conducted a second user study to quantitatively evaluate our prototype comparing several gestural and non-gestural input techniques. The results of this study show no clear advantage or disadvantage of gestural inputs vs.~non-gestural input techniques on HMDs. We did find that voice control as (sole) input modality performed worst compared to the other input techniques we evaluated. Lastly, we present two further applications implemented with our system, demonstrating 3D scene viewing and ambient light control. We conclude by briefly discussing the implications of ego-centric vs. exo-centric tracking for interaction in smart spaces.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125580362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building a real-world immersive 3D modeling application is hard. In spite of the many supposed advantages of working in the virtual world, users quickly tire of waving their arms about, and the resulting models remain simplistic at best. The dream of creation at the speed of thought has largely remained unfulfilled due to numerous factors, such as the lack of suitable menu and system controls, the inability to perform precise manipulations, the lack of numeric input, challenges with ergonomics, and difficulties with maintaining user focus and preserving immersion. Our research focuses on building virtual world applications that can go beyond the demo and be used to do real-world work. The goal is to develop interaction techniques that support the richness and complexity required to build complex 3D models, yet minimize the expenditure of user energy and maximize user comfort. We present an approach that combines the natural and intuitive power of VR interaction, the precision and control of 2D touch surfaces, and the richness of a commercial modeling package. We also discuss the benefits of collocating 2D touch with 3D bimanual spatial input, the challenges in designing a custom controller for this purpose, and the new avenues that this collocation creates.
{"title":"Making VR work: building a real-world immersive modeling application in the virtual world","authors":"M. Mine, A. Yoganandan, D. Coffey","doi":"10.1145/2659766.2659780","DOIUrl":"https://doi.org/10.1145/2659766.2659780","url":null,"abstract":"Building a real-world immersive 3D modeling application is hard. In spite of the many supposed advantages of working in the virtual world, users quickly tire of waving their arms about and the resulting models remain simplistic at best. The dream of creation at the speed of thought has largely remained unfulfilled due to numerous factors such as the lack of suitable menu and system controls, inability to perform precise manipulations, lack of numeric input, challenges with ergonomics, and difficulties with maintaining user focus and preserving immersion. The focus of our research is on the building of virtual world applications that can go beyond the demo and can be used to do real-world work. The goal is to develop interaction techniques that support the richness and complexity required to build complex 3D models, yet minimize expenditure of user energy and maximize user comfort. We present an approach that combines the natural and intuitive power of VR interaction, the precision and control of 2D touch surfaces, and the richness of a commercial modeling package. We also discuss the benefits of collocating 2D touch with 3D bimanual spatial input, the challenges in designing a custom controller targeted at achieving the same, and the new avenues that this collocation creates.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126781924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We measured the operation times of two tasks using a video see-through head-mounted display (HMD) in first- and third-person views.
{"title":"Measurements of operating time in first and third person views using video see-through HMD","authors":"T. Koike","doi":"10.1145/2659766.2661204","DOIUrl":"https://doi.org/10.1145/2659766.2661204","url":null,"abstract":"We measured the operation times of two tasks using video a transparent video head mounted display (HMD) in first and third person views.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126209410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel spatial-temporal feature set for sign language recognition, wherein we construct explicit spatial and temporal features that capture both hand movement and hand shape. Experimental results show that the proposed solution outperforms an existing one in terms of accuracy.
{"title":"Real-time sign language recognition using RGBD stream: spatial-temporal feature exploration","authors":"Fuyang Huang, Zelong Sun, Q. Xu, F. Sze, Tang Wai Lan, Xiaogang Wang","doi":"10.1145/2659766.2661214","DOIUrl":"https://doi.org/10.1145/2659766.2661214","url":null,"abstract":"We propose a novel spatial-temporal feature set for sign language recognition, wherein we construct explicit spatial and temporal features that capture both hand movement and hand shape. Experimental results show that the proposed solution outperforms existing one in terms of accuracy.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121788488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce the Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed at students and hobbyists, which we have used in an annually organized VR course for the past four years. The RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like the Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion and can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing the development difficulties experienced by users of different software toolkits and by presenting several VR applications created with RUIS, demonstrating the variety of spatial user interfaces it can produce.
{"title":"RUIS: a toolkit for developing virtual reality applications with spatial interaction","authors":"Tuukka M. Takala","doi":"10.1145/2659766.2659774","DOIUrl":"https://doi.org/10.1145/2659766.2659774","url":null,"abstract":"We introduce Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed for students and hobbyists, which we have used in an annually organized VR course for the past four years. RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129974582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present HoloLeap, which uses a Leap Motion controller for 3D model manipulation on a light field display (LFD). Like autostereo displays, LFDs support glasses-free 3D viewing. Unlike autostereo displays, LFDs automatically accommodate multiple viewpoints without the need for additional tracking equipment. We describe a gesture-based manipulation technique that enables control of 3D objects with seven degrees of freedom (7DOF) by leveraging natural and familiar gestures. We provide an overview of research questions aimed at optimizing gestural input on light field displays.
{"title":"HoloLeap: towards efficient 3D object manipulation on light field displays","authors":"V. K. Adhikarla, Paweł W. Woźniak, Robert J. Teather","doi":"10.1145/2659766.2661223","DOIUrl":"https://doi.org/10.1145/2659766.2661223","url":null,"abstract":"We present HoloLeap, which uses a Leap Motion controller for 3D model manipulation on a light field display (LFD). Like autostereo displays, LFDs support glasses-free 3D viewing. Unlike autostereo displays, LFDs automatically accommodate multiple viewpoints without the need of additional tracking equipment. We describe a gesture-based object manipulation that enables manipulation of 3D objects with 7DOFs by leveraging natural and familiar gestures. We provide an overview of research questions aimed at optimizing gestural input on light field displays.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124559863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large format displays are commonplace for viewing large scientific datasets. These displays often find their way into collaborative spaces, allowing for multiple individuals to be collocated with the display, though multi-modal interaction with the displayed content remains a challenge. We have begun development of a tablet-based interaction mode for use with large format displays to augment these workspaces.
{"title":"Augmenting views on large format displays with tablets","authors":"Phil Lindner, Adolfo Rodriguez, T. Uram, M. Papka","doi":"10.1145/2659766.2661227","DOIUrl":"https://doi.org/10.1145/2659766.2661227","url":null,"abstract":"Large format displays are commonplace for viewing large scientific datasets. These displays often find their way into collaborative spaces, allowing for multiple individuals to be collocated with the display, though multi-modal interaction with the displayed content remains a challenge. We have begun development of a tablet-based interaction mode for use with large format displays to augment these workspaces.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114836935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigated mouse-based 3D selection using one-eyed cursors, evaluating stereo and head-tracking. Stereo cursors significantly reduced performance for targets at different depths, but the one-eyed cursor yielded some discomfort.
{"title":"Depth cues and mouse-based 3D target selection","authors":"Robert J. Teather, W. Stuerzlinger","doi":"10.1145/2659766.2661221","DOIUrl":"https://doi.org/10.1145/2659766.2661221","url":null,"abstract":"We investigated mouse-based 3D selection using one-eyed cursors, evaluating stereo and head-tracking. Stereo cursors significantly reduced performance for targets at different depths, but the one-eyed cursor yielded some discomfort.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116897570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical visualizations are an emerging area of research and appear in increasingly diverse forms. While they provide an engaging way to explore data, they are often limited by a fixed representation and lack interactivity. In this work we discuss our early approaches and experiences in combining physical visualizations with spatial augmented reality, and we present an initial prototype.
{"title":"Projection augmented physical visualizations","authors":"Simon Stusak, M. Teufel","doi":"10.1145/2659766.2661210","DOIUrl":"https://doi.org/10.1145/2659766.2661210","url":null,"abstract":"Physical visualizations are an emergent area of research and appear in increasingly diverse forms. While they provide an engaging way of data exploration, they are often limited by a fixed representation and lack interactivity. In this work we discuss our early approaches and experiences in combining physical visualizations with spatial augmented reality and present an initial prototype.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129793741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}