"The significance of stereopsis and motion parallax in mobile head tracking environments"
Paul Lubos, Dimitar Valkov
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2661220
Despite 3D TVs and applications gaining popularity in recent years, 3D displays on mobile devices remain rare. With low-cost head-tracking solutions and the first such user interfaces available on smartphones, the question arises how effective the 3D impression produced by motion parallax is, and whether viable depth perception can be achieved without binocular stereo cues. Since motion parallax and stereopsis may be considered the most important depth cues, we developed an experiment comparing the user's depth perception with head tracking, with and without stereopsis.
"Safe-&-round: bringing redirected walking to small virtual reality laboratories"
Paul Lubos, G. Bruder, Frank Steinicke
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2661219
Walking is usually considered the most natural form of self-motion in a virtual environment (VE). However, the confined physical workspace of typical virtual reality (VR) labs often prevents natural exploration of larger VEs. Redirected walking has been introduced as a potential solution to this restriction, but the corresponding techniques often induce strong manipulations in very small workspaces and therefore fail to provide a natural experience. In this poster we propose the Safe-&-Round user interface, which supports natural walking through a potentially infinite virtual scene while the user is confined to a severely restricted physical workspace. This virtual locomotion technique relies on a safety volume, displayed as a semi-transparent half-capsule, inside which the user can walk without the manipulations caused by redirected walking.
"Void shadows: multi-touch interaction with stereoscopic objects on the tabletop"
A. Giesler, Dimitar Valkov, K. Hinrichs
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2659779
In this paper we present Void Shadows, a novel stereoscopic 3D interaction paradigm in which each virtual object casts a shadow onto a touch-enabled display surface. The user can conveniently interact with such a shadow, and her actions are transferred to the associated object. Since all interactive tasks are carried out on the zero-parallax plane, there are no accommodation-convergence conflicts or related 2D/3D interaction problems, while the user is still able to "directly" manipulate objects at different 3D positions without first having to position a cursor and select an object. An initial user study demonstrated the applicability of the metaphor for some common tasks and showed that, compared to in-air 3D interaction techniques, users performed up to 28% more precisely in about the same amount of time.
"Ethereal planes: a design framework for 2D information space in 3D mixed reality environments"
Barrett Ens, Juan David Hincapié-Ramos, Pourang Irani
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2659769
Information spaces are virtual workspaces that help us manage information by mapping it to the physical environment. This widely influential concept has been interpreted in a variety of forms, often in conjunction with mixed reality. We present Ethereal Planes, a design framework that ties together many existing variations of 2D information spaces. Ethereal Planes is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays. From an extensive literature review, we encapsulated the common attributes of existing novel designs in seven design dimensions. Mapping the reviewed designs to the framework dimensions reveals a set of common usage patterns. We discuss how the Ethereal Planes framework can be methodically applied to help inspire new designs. We provide a concrete example of the framework's utility during the design of the Personal Cockpit, a window management system for head-worn displays.
"Are 4 hands better than 2?: bimanual interaction for quadmanual user interfaces"
Paul Lubos, G. Bruder, Frank Steinicke
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2659782
The design of spatial user interaction for immersive virtual environments (IVEs) is an inherently difficult task. Missing haptic feedback and spatial misperception hinder efficient direct interaction with virtual objects. Moreover, interaction performance depends on a variety of ergonomic factors, such as the user's endurance, muscular strength, and fitness. However, the potential benefits of direct and natural interaction offered by IVEs encourage research into more efficient interaction methods. We suggest a novel approach to 3D interaction that exploits the fact that, for many tasks, bimanual interaction shows benefits over one-handed interaction in a confined interaction space. In this paper we push this idea even further and introduce quadmanual user interfaces (QUIs) with two additional, virtual hands. These magic hands allow users to keep their arms in a comfortable position yet still interact with multiple virtual interaction spaces. To analyze our approach we conducted a performance experiment inspired by a Fitts' law selection task, investigating the feasibility of our approach for natural interaction with 3D objects in virtual space.
"Real-time and robust grasping detection"
Chih-Fan Chen, Ryan P. Spicer, Rhys Yahata, M. Bolas, Evan A. Suma
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2661224
Depth-based gesture cameras provide a promising and novel way to interface with computers. Nevertheless, this type of interaction remains challenging due to the complexity of finger interactions and large viewpoint variations. Existing middleware such as the Intel Perceptual Computing SDK (PCSDK) or SoftKinetic IISU can provide abundant hand tracking and gesture information. However, the data is too noisy (Fig. 1, left) for consistent and reliable use in our application. In this work, we present a filtering approach that combines several features from the PCSDK to obtain a more stable hand openness estimate and to support grasping interactions in virtual environments. A support vector machine (SVM) is used to achieve better accuracy in a single frame, and a Markov random field (MRF) is used to stabilize and smooth the sequential output. Our experimental results verify the effectiveness and robustness of our method.
"Augmented reality paper clay making based on hand gesture recognition"
P. Chiang, Wei-Yu Li
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2661209
We propose a gesture-based 3D modeling system that allows the user to create and sculpt a 3D model with hand gestures. The goal of our system is to provide a more intuitive 3D user interface than traditional 2D devices such as the mouse or touch pad. Inspired by how people make paper clay, we designed a series of hand gestures for interacting with the 3D object and developed their corresponding mesh processing functions. Thus, the user can create a desired virtual 3D object just as in paper clay making.
"LeapLook: a free-hand gestural travel technique using the leap motion finger tracker"
Robert Codd-Downey, W. Stuerzlinger
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2661218
Contactless motion sensing devices enable a new form of input that does not encumber the user with wearable tracking equipment. We present a novel travel technique using the Leap Motion finger tracker which adopts a 2DOF steering metaphor used in traditional mouse and keyboard navigation in many 3D computer games.
"AnnoScape: remote collaborative review using live video overlay in shared 3D virtual workspace"
Austin S. Lee, H. Chigira, S. Tang, Kojo Acquah, H. Ishii
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2659776
We introduce AnnoScape, a remote collaboration system that allows users to overlay live video of the physical desktop on a shared 3D virtual workspace, supporting individual and collaborative review of 2D and 3D content using hand gestures and real ink. The AnnoScape system enables distributed users to visually navigate the shared 3D virtual workspace, individually or jointly, by moving tangible handles; to snap into a shared viewpoint; and to generate a live video overlay of freehand annotations from the desktop surface onto the system's virtual viewports, which can be placed spatially in the 3D data space. Finally, we present results of a preliminary user study and discuss design issues and AnnoScape's potential to facilitate effective communication during remote 3D data reviews.
"Hidden UI: projection-based augmented reality for map navigation on multi-touch tabletop"
Seungjae Oh, Heeseung Kwon, H. So
Proceedings of the 2nd ACM symposium on Spatial user interaction, 2014. doi:10.1145/2659766.2661228
We present an interactive system integrating a multi-touch tabletop with projection-based augmented reality (AR). The integrated system supports the flexible presentation of multiple UI components, which is suitable for multi-touch tabletop environments that display complex information at different layers.