A Taxonomy for Designing Walking-based Locomotion Techniques for Virtual Reality
Mahdi Nabiyouni and D. Bowman. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3010076

Designers have yet to find a fully general and effective solution to the problem of walking in large or unlimited virtual environments. A detailed taxonomy of walking-based locomotion techniques would help researchers better understand, analyze, and design walking techniques for virtual reality (VR). We present a taxonomy that helps designers and researchers investigate the fundamental components of locomotion techniques. Using this organization, researchers can create novel locomotion techniques by making choices from the taxonomy's components, analyze and improve existing techniques, or design experiments that evaluate locomotion techniques in detail.
Personalized Views for Immersive Analytics
Santiago Bonada, Rafael Veras, and C. Collins. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3009953

In this paper we present work in progress toward a vision of personalized views of visual analytics interfaces in the context of collaborative analytics in immersive spaces. In particular, we are interested in the sense of immersion, responsiveness, and personalization afforded by gaze-based input. By combining large-screen visual analytics tools with eye-tracking, a collaborative visual analytics system can become egocentric without disrupting the collaborative nature of the experience. We present a prototype system and several ideas for real-time personalization of views in visual analytics.
Immersive Analysis of Health-Related Data with Mixed Reality Interfaces: Potentials and Open Questions
Jens Müller, Simon Butscher, and Harald Reiterer. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3009951

In this paper we propose Mixed Reality (MR) interfaces as tools for the analysis and exploration of health-related data. The reported findings originate from the research project "SMARTACT", in which several intervention studies are conducted to investigate how participants' long-term health behavior can be improved. We conducted a focus group to identify limitations of current data analysis technologies and practices, possible uses of MR interfaces, and associated open questions for leveraging their potential in this domain.
Use of Landmarks to Design Large and Efficient Command Interfaces
Md. Sami Uddin. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3009942

Rapid command selection is a priority for user interface designers. Spatial memory is an effective way to improve selection performance, since it allows users to make quick selection decisions from memory rather than relying on slow visual search. Spatial learning in the real world leverages landmarks available in the environment, but user interfaces often lack visual landmarks. As a result, when the number of commands increases, remembering locations and selecting commands efficiently become difficult. To improve the efficiency of memory-based user interfaces, I am investigating the use of landmarks in different spatially stable interfaces. With a series of exemplar interfaces, I am exploring the interface and human factors related to users' spatial knowledge. My colleagues and I developed a new memory-based technique, the HandMark menu, which uses the hands and fingers as landmarks and helps users remember commands placed between and around them. My current research investigates the use of both natural and artificial landmarks in spatial interfaces, and their effects on rapid spatial memory development.
Visual Immersion in the Context of Wall Displays
Arnaud Prouzeau, A. Bezerianos, and O. Chapuis. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3009945

Immersion is the subjective impression of being deeply involved in a specific situation, and can be sensory or cognitive. In this position paper, we use a basic model of visual perception to study how ultra-high-resolution wall displays can provide visual immersion. With their large size, and depending on the position of viewers in front of them, wall displays can provide a surrounding and vivid environment. Users close to the wall can have their visual field filled by the display and can clearly see a large amount of information at fine resolution. However, when close to the wall, visual distortion due to large viewing angles can affect the viewing of data. From far away, by contrast, distortion is no longer an issue, but the wall no longer fills the viewers' visual field, and the details they can see are less fine.
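The trade-off this abstract describes, where a nearby viewer's visual field is filled but viewing angles at the wall's edges grow large, while a distant viewer sees the whole wall at coarser detail, follows from basic subtended-angle trigonometry. The sketch below is a minimal illustration of that geometry, not the authors' perception model; the wall width, the viewer distances, and the assumption of a viewer centered in front of a flat wall are all hypothetical.

```python
import math

def subtended_angle_deg(wall_width_m, distance_m):
    """Horizontal visual angle the wall subtends for a centered viewer."""
    return math.degrees(2 * math.atan(wall_width_m / (2 * distance_m)))

def edge_incidence_deg(wall_width_m, distance_m):
    """Viewing angle (from the wall normal) toward the wall's edge;
    larger values mean stronger perspective distortion at the edge."""
    return math.degrees(math.atan((wall_width_m / 2) / distance_m))

WALL_WIDTH = 5.5  # hypothetical wall width in metres

for d in (0.5, 2.0, 5.0):
    print(f"d={d:4.1f} m: wall subtends {subtended_angle_deg(WALL_WIDTH, d):6.1f} deg, "
          f"edge incidence {edge_incidence_deg(WALL_WIDTH, d):5.1f} deg")
```

Up close (0.5 m) the wall subtends roughly 160 degrees, close to filling the horizontal visual field, but rays to the edge hit the wall at nearly 80 degrees from its normal, which is where the distortion the authors mention arises; at 5 m the edge incidence is mild but the wall subtends well under 60 degrees.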
Immersive Solutions for Future Air Traffic Control and Management
Maxime Cordeil, Tim Dwyer, and C. Hurter. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3009944

In this paper we review the activities of Air Traffic Control and Management (ATC/M) and present scenarios that illustrate current and future challenges in this domain, in particular those challenges that can be tackled through immersion. We introduce the concepts of an immersive Remote Tower and Collaborative Immersive Trajectory analysis. These make use of immersive technologies such as head-mounted displays (HMDs) and large tiled displays to immerse users in their tasks, better supporting the management and analysis of the complex data produced in this domain.
Effects of Workspace Awareness and Territoriality in Environments with Large, Shared Displays
A. Sigitov. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3009940

Synchronous cooperative work by multiple collaborators on large, high-resolution display systems involves psychological phenomena such as workspace awareness and human territoriality. These phenomena, and the interplay between them, can significantly affect human-human and human-environment interaction. In a non-digital environment, humans rely on their own physical abilities, utilities, and social protocols to control these phenomena (e.g., closing their eyes or using earplugs to reduce workspace awareness; turning toward collaborators to increase it). Digital environments, on the other hand, make it possible to ease, automate, and unify such control processes, relieving users of that burden. First, however, we must understand what effects workspace awareness and territoriality have within a collaborative environment. The aim of this doctoral thesis is to investigate the effects of workspace awareness and territoriality on users and interaction processes in mixed-focus scenarios across various collaborative settings.
Gesture-driven Interactions on a Virtual Hologram in Mixed Reality
Dianna Yim, Garance Nicole Loison, F. H. Fard, Edwin Chan, Alec McAllister, and F. Maurer. In Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces (Nov. 6, 2016). https://doi.org/10.1145/3009939.3009948

This paper describes a framework using the Microsoft Kinect 2 and the Microsoft HoloLens that can assist users in analyzing complex datasets. The system allows groups of people to view a topological map as a virtual hologram in order to help them understand complex datasets. The gestures built into the system were created with usability in mind: by allowing users to resize, rotate, and reposition the map, the system opens up a much wider range of ways to understand the data. Custom gestures are also possible depending on the situation, such as raising or lowering the water level in a potential flood hot spot, or viewing graphs and charts associated with a specific data point.
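Two-hand resize-and-rotate interactions of the kind described above are commonly driven by comparing the positions of two tracked hands (or pinch points) across frames: the change in the distance between the hands gives a scale factor, and the change in the angle of the line joining them gives a rotation. The sketch below illustrates that general idea in a 2D simplification; it is not the authors' HoloLens implementation, and the function name and geometry are assumptions.

```python
import math

def two_hand_transform(prev_l, prev_r, cur_l, cur_r):
    """Derive a uniform scale factor and a rotation (degrees) for a held
    virtual object from the previous and current 2D positions of two
    tracked hands. Hypothetical helper, not a HoloLens API."""
    prev_v = (prev_r[0] - prev_l[0], prev_r[1] - prev_l[1])
    cur_v = (cur_r[0] - cur_l[0], cur_r[1] - cur_l[1])
    prev_len = math.hypot(*prev_v)
    cur_len = math.hypot(*cur_v)
    # Scale with the change in hand separation; guard against zero length.
    scale = cur_len / prev_len if prev_len else 1.0
    # Rotate with the change in the angle of the inter-hand vector.
    rotation = math.degrees(math.atan2(cur_v[1], cur_v[0]) -
                            math.atan2(prev_v[1], prev_v[0]))
    return scale, rotation

# Hands move apart along the same axis: the map grows, no rotation.
s, r = two_hand_transform((0, 0), (1, 0), (-0.5, 0), (1.5, 0))  # s == 2.0, r == 0.0
```

In a real mixed-reality pipeline the resulting scale and rotation would be applied incrementally to the hologram's transform each frame while both hands are pinching, which is one plausible way to realize the resize and rotate gestures the paper mentions.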