Collision avoidance in the presence of a virtual agent in small-scale virtual environments
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460045
A. Bönsch, B. Weyers, J. Wendt, Sebastian Freitag, T. Kuhlen
Computer-controlled, human-like virtual agents (VAs) are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet. Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked by either a male or a female VA representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion. Our results indicate that participants preferred collaborative collision avoidance: they expected the VA to step aside to give them more space to pass, while they were willing to adapt their own walking paths.
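The collaborative avoidance behaviour the participants preferred, with the VA yielding space while the user also bends their own path, can be illustrated with a minimal sketch. This is not the study's implementation; the personal-space radius, side-step distance and the simple geometric rule are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of a collaborative collision-avoidance rule (not the
# study's implementation): if the user's path to the goal passes through the
# virtual agent's personal space, the agent side-steps, while the user is
# still expected to curve slightly around it.

PERSONAL_SPACE = 0.8   # metres, assumed protective-zone radius
SIDE_STEP      = 0.5   # metres, assumed lateral displacement of the VA

def va_side_step(user_pos, goal_pos, va_pos):
    """Return a new VA position that yields space along the user->goal line."""
    path = goal_pos - user_pos
    path_dir = path / np.linalg.norm(path)
    to_va = va_pos - user_pos
    along = np.dot(to_va, path_dir)                 # progress of the VA along the walking line
    closest = user_pos + along * path_dir
    lateral = va_pos - closest                      # offset of the VA from the walking line
    dist = np.linalg.norm(lateral)
    if 0.0 < along < np.linalg.norm(path) and dist < PERSONAL_SPACE:
        # Step further away from the line, opening space for the user to pass.
        side_dir = lateral / dist if dist > 1e-6 else np.array([-path_dir[1], path_dir[0]])
        return va_pos + side_dir * SIDE_STEP
    return va_pos

user = np.array([0.0, 0.0])
door = np.array([4.0, 0.0])
va   = np.array([2.0, 0.2])                         # co-worker standing almost on the walking line
print(va_side_step(user, door, va))                 # VA moves laterally off the line
```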
{"title":"Collision avoidance in the presence of a virtual agent in small-scale virtual environments","authors":"A. Bönsch, B. Weyers, J. Wendt, Sebastian Freitag, T. Kuhlen","doi":"10.1109/3DUI.2016.7460045","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460045","url":null,"abstract":"Computer-controlled, human-like virtual agents (VAs), are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet. Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked either by a male or female VA, representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion. Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125015366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SharpView: Improved clarity of defocused content on optical see-through head-mounted displays
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460049
Kohei Oshima, Kenneth R. Moser, D. Rompapas, J. Swan, Sei Ikeda, Goshiro Yamamoto, Takafumi Taketomi, C. Sandor, H. Kato
Augmented Reality (AR) systems that utilize optical see-through head-mounted displays are becoming more commonplace, with several consumer-level options already available and the promise of additional, more advanced devices on the horizon. A common factor among current-generation optical see-through devices, though, is a fixed focal distance to virtual content. While fixed focus is not a concern for video see-through AR, since both virtual and real-world imagery are combined into a single image by the display, unequal distances between real-world objects and the virtual display screen are unavoidable in optical see-through AR. In this work, we investigate the issue of focus blur, in particular the blurring caused by simultaneously viewing virtual content and physical objects in the environment at differing focal distances. We additionally examine the application of dynamic sharpening filters as a straightforward, system-independent means of mitigating this effect and improving the clarity of defocused AR content. We assess the utility of this method, termed SharpView, by employing an adjustment experiment in which users actively apply varying amounts of sharpening to reduce the perception of blur in AR content shown at four levels of focal disparity relative to real-world imagery. Our experimental results confirm that dynamic correction schemes are required to adequately address the presence of blur in optical see-through AR. Furthermore, we validate the ability of our SharpView model to improve the perceived visual clarity of focus-blurred content, with optimal performance at focal differences well suited to near-field AR applications.
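The core idea, pre-sharpening the virtual layer in proportion to the focal disparity, can be sketched with a simple unsharp mask. The disparity-to-blur mapping and the gain below are illustrative assumptions, not the calibrated SharpView model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal sketch of dynamic sharpening for optical see-through AR: pre-sharpen
# virtual content with an unsharp mask whose strength grows with the focal
# disparity between the display and the real-world fixation depth. The
# disparity-to-sigma mapping and the gain are assumptions for illustration.

def sharpen_for_disparity(image, focal_disparity_d, sigma_per_diopter=1.5, gain=1.0):
    """image: float array in [0, 1]; focal_disparity_d: |1/d_display - 1/d_fixation| in dioptres."""
    sigma = sigma_per_diopter * focal_disparity_d      # assumed blur width in pixels
    if sigma <= 0:
        return image
    blurred = gaussian_filter(image, sigma=sigma)
    sharpened = image + gain * (image - blurred)       # unsharp masking
    return np.clip(sharpened, 0.0, 1.0)

virtual_layer = np.random.rand(256, 256)               # stand-in for rendered AR content
pre_sharpened = sharpen_for_disparity(virtual_layer, focal_disparity_d=0.5)
```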
{"title":"SharpView: Improved clarity of defocused content on optical see-through head-mounted displays","authors":"Koheiushima, Kenneth R. Moser, D. Rompapas, J. Swan, Sei Ikeda, Goshiro Yamamoto, Takafumi Taketomi, C. Sandor, H. Kato","doi":"10.1109/3DUI.2016.7460049","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460049","url":null,"abstract":"Augmented Reality (AR) systems, which utilize optical see-through head-mounted displays, are becoming more common place, with several consumer level options already available, and the promise of additional, more advanced, devices on the horizon. A common factor among current generation optical see-through devices, though, is fixed focal distance to virtual content. While fixed focus is not a concern for video see-through AR, since both virtual and real world imagery are combined into a single image by the display, unequal distances between real world objects and the virtual display screen in optical see-through AR is unavoidable. In this work, we investigate the issue of focus blur, in particular, the blurring caused by simultaneously viewing virtual content and physical objects in the environment at differing focal distances. We additionally examine the application of dynamic sharpening filters as a straight forward, system independent, means for mitigating this effect improving the clarity of defocused AR content. We assess the utility of this method, termed SharpView, by employing an adjustment experiment in which users actively apply varying amounts of sharpening to reduce the perception of blur in AR content shown at four focal disparity levels relative to real world imagery. Our experimental results confirm that dynamic correction schemes are required for adequately addressing the presence of blur in Optical See-Through AR. Furthermore, we validate the ability of our SharpView model to improve the perceived visual clarity of focus blurred content, with optimal performance at focal differences well suited for near field AR applications.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128976873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A schematic eye for virtual environments
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460055
J. A. Jones, Darlene E. Edewaard, R. Tyrrell, L. Hodges
This paper presents a schematic eye model designed for use by virtual environments researchers and practitioners. This model, based on a combination of several ophthalmic models, attempts to closely approximate a user's optical centers and intraocular separation using as little as a single measurement of pupillary distance (PD). Typically, these parameters are loosely approximated based on the PD of the user while converged to some known distance. However, this may not be sufficient for users to accurately perform spatially sensitive tasks in the near field. We investigate this possibility by comparing the impact of several common PD-based models and our schematic eye model on users' ability to accurately match real and virtual targets in depth. This was done using a specially designed display and robotic positioning apparatus that allowed sub-millimeter measurement of target positions and user responses. We found that the schematic eye model resulted in significantly improved real-to-virtual matches, with average accuracy, in some cases, well under 1 mm. We also present a novel, low-cost method of accurately measuring PD using an off-the-shelf trial frame and pinhole filters. We validated this method by comparing its measurements against those taken with an ophthalmic autorefractor. Significant differences were not found between the two methods.
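The kind of correction a schematic eye enables can be illustrated by recovering the anatomical inter-pupillary separation from a PD measured while the eyes converge on a near target. The rotation-centre-to-pupil distance used below is an assumed ophthalmic constant, not necessarily the value adopted in the paper.

```python
import math

# Hedged sketch: estimate the inter-pupillary distance at infinity from a PD
# measured on a converged target. When the eyes rotate inward, the entrance
# pupils shift toward each other by roughly r * sin(theta) each, where r is
# the (assumed) distance from the eye's rotation centre to its entrance pupil.

R_PUPIL = 0.0105          # metres, assumed rotation-centre-to-entrance-pupil distance

def anatomical_ipd(measured_pd, convergence_dist, r=R_PUPIL, iters=20):
    """measured_pd and convergence_dist in metres; returns estimated IPD at infinity."""
    ipd = measured_pd
    for _ in range(iters):                                # simple fixed-point iteration
        theta = math.atan2(ipd / 2.0, convergence_dist)   # inward rotation of each eye
        ipd = measured_pd + 2.0 * r * math.sin(theta)     # undo the pupils' inward shift
    return ipd

# PD of 61.5 mm measured on a target 40 cm away:
print(round(anatomical_ipd(0.0615, 0.40) * 1000, 2), "mm")
```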
{"title":"A schematic eye for virtual environments","authors":"J. A. Jones, Darlene E. Edewaard, R. Tyrrell, L. Hodges","doi":"10.1109/3DUI.2016.7460055","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460055","url":null,"abstract":"This paper presents a schematic eye model designed for use by virtual environments researchers and practitioners. This model, based on a combination of several ophthalmic models, attempts to very closely approximate a user's optical centers and intraocular separation using as little as a single measurement of pupillary distance (PD). Typically, these parameters are loosely approximated based on the PD of the user while converged to some known distance. However, this may not be sufficient for users to accurately perform spatially sensitive tasks in the near field. We investigate this possibility by comparing the impact of several common PD-based models and our schematic eye model on users' ability to accurately match real and virtual targets in depth. This was done using a specially designed display and robotic positioning apparatus that allowed sub-millimeter measurement of target positions and user responses. We found that the schematic eye model resulted in significantly improved real to virtual matches with average accuracy, in some cases, well under 1mm. We also present a novel, low-cost method of accurately measuring PD using an off-the-shelf trial frame and pinhole filters. We validated this method by comparing its measurements against those taken using an ophthalmic autorefractor. Significant differences were not found between the two methods.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130678123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated path prediction for redirected walking using navigation meshes
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460032
Mahdi Azmandian, Timofey Grechkin, M. Bolas, Evan A. Suma
Redirected walking techniques have been introduced to overcome physical space limitations for natural locomotion in virtual reality. These techniques decouple real and virtual user trajectories by subtly steering the user away from the boundaries of the physical space while maintaining the illusion that the user follows the intended virtual path. The effectiveness of redirection algorithms can improve significantly when a reliable prediction of the user's future virtual path is available. In current solutions, the future user trajectory is predicted based on non-standardized manual annotations of the environment structure, which is both tedious and inflexible. We propose a method for automatically generating environment annotation graphs and predicting the user trajectory using navigation meshes. We discuss the integration of this method with existing redirected walking algorithms such as FORCE and MPCRed. Automated annotation of the virtual environment's structure enables simplified deployment of these algorithms in any virtual environment.
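A minimal sketch of the underlying idea follows, assuming polygon centroids as graph nodes and shared edges as links; it is not the authors' implementation and omits the integration with FORCE or MPCRed.

```python
import math

# Illustrative sketch: derive a coarse annotation graph from a navigation mesh
# by taking polygon centroids as nodes and shared edges as links, then predict
# the user's next node as the neighbour best aligned with the walking direction.

def navmesh_to_graph(polygons, adjacency):
    """polygons: list of vertex lists [(x, y), ...]; adjacency: list of (i, j) index pairs."""
    nodes = [tuple(sum(c) / len(c) for c in zip(*poly)) for poly in polygons]  # centroids
    edges = {i: set() for i in range(len(polygons))}
    for i, j in adjacency:
        edges[i].add(j)
        edges[j].add(i)
    return nodes, edges

def predict_next_node(nodes, edges, current_node, heading):
    """heading: unit vector of the user's current walking direction."""
    best, best_score = None, -2.0
    cx, cy = nodes[current_node]
    for n in edges[current_node]:
        dx, dy = nodes[n][0] - cx, nodes[n][1] - cy
        norm = math.hypot(dx, dy) or 1.0
        score = (dx * heading[0] + dy * heading[1]) / norm   # cosine with the heading
        if score > best_score:
            best, best_score = n, score
    return best

polys = [[(0, 0), (2, 0), (2, 2), (0, 2)], [(2, 0), (4, 0), (4, 2), (2, 2)], [(0, 2), (2, 2), (2, 4), (0, 4)]]
nodes, edges = navmesh_to_graph(polys, [(0, 1), (0, 2)])
print(predict_next_node(nodes, edges, 0, (1.0, 0.0)))        # -> 1, the polygon to the right
```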
{"title":"Automated path prediction for redirected walking using navigation meshes","authors":"Mahdi Azmandian, Timofey Grechkin, M. Bolas, Evan A. Suma","doi":"10.1109/3DUI.2016.7460032","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460032","url":null,"abstract":"Redirected walking techniques have been introduced to overcome physical space limitations for natural locomotion in virtual reality. These techniques decouple real and virtual user trajectories by subtly steering the user away from the boundaries of the physical space while maintaining the illusion that the user follows the intended virtual path. Effectiveness of redirection algorithms can significantly improve when a reliable prediction of the users future virtual path is available. In current solutions, the future user trajectory is predicted based on non-standardized manual annotations of the environment structure, which is both tedious and inflexible. We propose a method for automatically generating environment annotation graphs and predicting the user trajectory using navigation meshes. We discuss the integration of this method with existing redirected walking algorithms such as FORCE and MPCRed. Automated annotation of the virtual environments structure enables simplified deployment of these algorithms in any virtual environment.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131303610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Floating charts: Data plotting using free-floating acoustically levitated representations
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460051
Themis Omirou, A. Pérez, S. Subramanian, A. Roudaut
Charts are graphical representations of numbers that help us extract trends and relations and, in general, gain a better understanding of data. For this reason, multiple systems have been developed to display charts in a digital or physical manner. Here, we introduce Floating Charts, a modular display that utilizes acoustic levitation to position free-floating objects. Multiple objects are individually levitated to compose a dynamic floating chart with the ability to move in real time to reflect changes in data. Floating objects can have different sizes and colours to represent extra information. Additionally, they can be levitated across other physical structures to improve depth perception. We present the system design, a technical evaluation and a catalogue of chart variations.
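A levitated chart needs a mapping from data values to target bead positions inside the working volume; the sketch below illustrates one such mapping with a hypothetical controller interface (move_bead is an assumption, not a documented API).

```python
# Sketch of the data-to-position mapping such a levitated display needs.
# The levitation range and the controller interface are illustrative
# assumptions; each data point becomes a target height for one bead.

LEVITATOR_MIN_Z = 0.01   # metres, assumed usable levitation range
LEVITATOR_MAX_Z = 0.12

def values_to_heights(values):
    """Linearly map data values onto the usable levitation height range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    scale = (LEVITATOR_MAX_Z - LEVITATOR_MIN_Z) / span
    return [LEVITATOR_MIN_Z + (v - lo) * scale for v in values]

def update_chart(controller, values, spacing=0.02):
    """Send one (x, z) target per bead; controller.move_bead is hypothetical."""
    for i, z in enumerate(values_to_heights(values)):
        controller.move_bead(index=i, x=i * spacing, z=z)

print(values_to_heights([3, 7, 5, 9]))   # four beads, four target heights
```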
{"title":"Floating charts: Data plotting using free-floating acoustically levitated representations","authors":"Themis Omirou, A. Pérez, S. Subramanian, A. Roudaut","doi":"10.1109/3DUI.2016.7460051","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460051","url":null,"abstract":"Charts are graphical representations of numbers that help us to extract trends, relations and in general to have a better understanding of data. For this reason, multiple systems have been developed to display charts in a digital or physical manner. Here, we introduce Floating Charts, a modular display that utilizes acoustic levitation for positioning free-floating objects. Multiple objects are individually levitated to compose a dynamic floating chart with the ability to move in real time to reflect changes in data. Floating objects can have different sizes and colours to represent extra information. Additionally, they can be levitated across other physical structures to improve depth perception. We present the system design, a technical evaluation and a catalogue of chart variations.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"101-102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132978871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In-situ flood visualisation using mobile AR
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460061
P. Haynes, Eckart Lange
We present a prototype augmented reality (AR) app for flood visualisation using techniques of in situ geometry modeling and constructive solid geometry (CSG). Natural and augmented point correspondences are computed using a method of interactive triangulation. Prototype geometry is oriented to pairs of triangulated points to model buildings and other structures within the scene. A CSG difference operation between a plane and the geometry produces the virtual flood plane, which can be translated vertically. Registration and tracking are achieved using the Qualcomm Vuforia software development kit (SDK). Focus is given to the means by which the objective is achieved using readily available technology.
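The CSG step can be sketched offline with a generic mesh library; the snippet below uses trimesh's boolean difference as a stand-in (an assumption; the actual app runs on a mobile AR stack with Vuforia, and the call requires a boolean engine to be installed).

```python
import trimesh

# Offline illustration of the flood-plane CSG operation: subtract the scene
# geometry from a thin horizontal slab at the chosen water level. Requires a
# boolean engine available to trimesh; not the mobile app's implementation.

def flood_plane(scene_mesh, water_level, extent=50.0, thickness=0.02):
    """Subtract the scene geometry from a thin horizontal slab at water_level."""
    slab = trimesh.creation.box(extents=[extent, extent, thickness])
    slab.apply_translation([0.0, 0.0, water_level])
    # CSG difference: the slab minus the buildings gives the visible water surface.
    return slab.difference(scene_mesh)

building = trimesh.creation.box(extents=[2.0, 2.0, 6.0])
building.apply_translation([0.0, 0.0, 3.0])
water = flood_plane(building, water_level=1.0)   # raise or lower to animate the flood
```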
{"title":"In-situ flood visualisation using mobile AR","authors":"P. Haynes, Eckart Lange","doi":"10.1109/3DUI.2016.7460061","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460061","url":null,"abstract":"We present a prototype augmented reality (AR) app for flood visualisation using techniques of in situ geometry modeling and constructive solid geometry (CSG). Natural and augmented point correspondences are computed using a method of interactive triangulation. Prototype geometry is oriented to pairs of triangulated points to model buildings and other structures within the scene. A CSG difference operation between a plane and the geometry produces the virtual flood plane, which can be translated vertically. Registration and tracking is achieved using the Qualcomm Vuforia software development kit (SDK). Focus is given to the means with which the objective is achieved using readily available technology.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115755547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative hybrid virtual environment
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460081
Leonardo Pavanatto Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli, M. Pinho, Regis Kopper
Supposing that, in a system operated by two users in different positions, it is easier for one of them to perform certain operations, we developed a 3D User Interface (3DUI) that allows two users to interact with an object together, using the three modification operations (scale, rotate and translate) to reach a goal. The operations can be performed using two augmented reality cubes, which provide up to 6 degrees of freedom, and each user can select any operation by using a keyboard button to cycle through them. Two different points of view are assigned: an exocentric view, in which the user stands at a given distance from the object, with a point of view similar to that of a human observer; and an egocentric view, in which the user stands much closer to the object, seeing the scene from the object's perspective. These points of view are locked to each user, which means that a user cannot use both views, only the one assigned to their ID. The cameras have a small margin of movement, allowing only a tilt to the sides according to the movements of the Oculus headset. With these features, this 3DUI aims to test which point of view is better suited for each operation and how the degrees of freedom should be divided between the users.
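Two of the interaction details, the per-user view lock and cycling through the three operations with a keyboard key, might look roughly like the sketch below; class and method names are illustrative assumptions, not the system's actual code.

```python
from itertools import cycle

# Illustrative sketch: each user ID is locked to one viewpoint, and a keyboard
# key cycles that user through the three manipulation operations.

OPERATIONS = ("translate", "rotate", "scale")
VIEWS = {0: "exocentric", 1: "egocentric"}        # locked per user ID

class CollaboratorState:
    def __init__(self, user_id):
        self.user_id = user_id
        self.view = VIEWS[user_id]                 # cannot be swapped at run time
        self._ops = cycle(OPERATIONS)
        self.operation = next(self._ops)

    def on_cycle_key(self):
        """Advance to the next manipulation operation."""
        self.operation = next(self._ops)
        return self.operation

users = [CollaboratorState(0), CollaboratorState(1)]
print(users[0].view, users[0].on_cycle_key())      # exocentric rotate
print(users[1].view, users[1].operation)           # egocentric translate
```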
{"title":"Collaborative hybrid virtual environment","authors":"Leonardo Pavanatto Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli, M. Pinho, Regis Kopper","doi":"10.1109/3DUI.2016.7460081","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460081","url":null,"abstract":"Supposing that, in a system operated by two users in different positions, it is easier for one of them to perform some operations, we developed a 3D User Interface (3DUI) that allows two users to interact together with an object, using the three modification operations (scale, rotate and translate) to reach a goal. The operations can be performed using two augmented reality cubes, which can obtain up to 6 degrees of freedom, and every user can select any operation by using a button on the keyboard to cycle through them. To the cubes are assigned two different points of view: an exocentric view, where the user will stand at a given distance from the object, with a point of view similar to the one of a human being; and an egocentric view, where the user will stand much closer to the object, having the point of view from the object's perspective. These points of view are locked to each user, which means that one user cannot use both views, just the one assigned to his ID. The cameras have a small margin of movement, allowing just a tilt to the sides, according to the Oculus's movements. With these features, this 3DUI aims to test which point of view is better for each operation, and how the degrees of freedom should be separated between the users.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115574768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The benefits of rotational head tracking
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460028
Swaroop K. Pal, Marriam Khan, Ryan P. McMahan
There are three common types of head tracking provided by virtual reality (VR) systems based on their degrees of freedom (DOF): complete 6-DOF, rotational 3-DOF, and translational 3-DOF. Prior research has indicated that complete 6-DOF head tracking provides significantly better user performance than not having head tracking, but there is little to no research comparing the three common types of head tracking. In this paper, we present one of the first studies to investigate and compare the effects of complete head tracking, rotational head tracking, and translational head tracking. The results of this study indicate that translational head tracking was significantly worse than complete and rotational head tracking, in terms of task time, task errors, reported usability, and presence. Surprisingly, we did not find any significant differences between complete and rotational head tracking. We discuss potential reasons why, in addition to the implications of the results.
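The three conditions can be thought of as selectively discarding parts of the tracked head pose; the sketch below shows one way to derive a per-condition view matrix and is not the study's actual code.

```python
import numpy as np

# Sketch of the three tracking conditions: a tracked head pose is a 4x4 matrix;
# rotational-only tracking discards the translational part, translational-only
# tracking discards the rotation. The fixed eye height is an assumed default.

def head_pose(rotation_3x3, position_xyz):
    pose = np.eye(4)
    pose[:3, :3] = rotation_3x3
    pose[:3, 3] = position_xyz
    return pose

def apply_condition(pose, condition, default_eye_height=1.7):
    out = pose.copy()
    if condition == "rotational_3dof":
        out[:3, 3] = [0.0, default_eye_height, 0.0]   # fixed head position
    elif condition == "translational_3dof":
        out[:3, :3] = np.eye(3)                       # fixed head orientation
    elif condition != "complete_6dof":
        raise ValueError(condition)
    return out

tracked = head_pose(np.eye(3), [0.2, 1.65, -0.1])
view_matrix = np.linalg.inv(apply_condition(tracked, "rotational_3dof"))
```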
{"title":"The benefits of rotational head tracking","authors":"Swaroop K. Pal, Marriam Khan, Ryan P. McMahan","doi":"10.1109/3DUI.2016.7460028","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460028","url":null,"abstract":"There are three common types of head tracking provided by virtual reality (VR) systems based on their degrees of freedom (DOF): complete 6-DOF, rotational 3-DOF, and translational 3-DOF. Prior research has indicated that complete 6-DOF head tracking provides significantly better user performance than not having head tracking, but there is little to no research comparing the three common types of head tracking. In this paper, we present one of the first studies to investigate and compare the effects of complete head tracking, rotational head tracking, and translational head tracking. The results of this study indicate that translational head tracking was significantly worse than complete and rotational head tracking, in terms of task time, task errors, reported usability, and presence. Surprisingly, we did not find any significant differences between complete and rotational head tracking. We discuss potential reasons why, in addition to the implications of the results.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125035191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented virtuality in real time for pre-visualization in film
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460050
Alex Stamm, Patrick Teall, Guillermo Blanco Benedicto
This project looks into creating an augmented virtuality pre-visualization system to empower indie filmmakers during the on-set production process. Indie directors are currently unable to pre-visualize their virtual set without the funds to pay for a high-fidelity 3D visualization system. Our team has created a pre-visualization prototype that allows independent filmmakers to perform augmented virtuality by placing actors into a computer-generated 3D environment for the purposes of virtual production. Through our preliminary usability research, we have identified a clear and effective 3D interface for film directors to use during the production process. This research lays the groundwork for building a pre-visualization system for on-set production that satisfies independent and emerging filmmakers.
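One plausible compositing step for such a prototype is keying the actor out of a green-screen camera frame and layering them over the rendered virtual set; the sketch below is an illustrative assumption, as the abstract does not describe the prototype's actual pipeline.

```python
import cv2
import numpy as np

# Illustrative augmented-virtuality compositing step (assumed, not the
# prototype's pipeline): chroma-key the actor out of a green-screen frame and
# place them over a rendered view of the virtual set.

def composite_actor(camera_frame_bgr, virtual_set_bgr,
                    key_lower=(35, 60, 60), key_upper=(85, 255, 255)):
    hsv = cv2.cvtColor(camera_frame_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array(key_lower), np.array(key_upper))
    actor_mask = cv2.bitwise_not(green)                        # everything that is not green
    actor = cv2.bitwise_and(camera_frame_bgr, camera_frame_bgr, mask=actor_mask)
    backdrop = cv2.bitwise_and(virtual_set_bgr, virtual_set_bgr, mask=green)
    return cv2.add(actor, backdrop)

frame = np.full((480, 640, 3), (0, 255, 0), dtype=np.uint8)    # stand-in green-screen frame
cg_set = np.zeros((480, 640, 3), dtype=np.uint8)               # stand-in rendered virtual set
composite = composite_actor(frame, cg_set)
```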
{"title":"Augmented virtuality in real time for pre-visualization in film","authors":"Alex Stamm, Patrick Teall, Guillermo Blanco Benedicto","doi":"10.1109/3DUI.2016.7460050","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460050","url":null,"abstract":"This project looks into creating an augmented virtuality pre-visualization system to empower indie filmmakers during the onset production process. Indie directors are currently unable to pre-visualize their virtual set without the funds to pay for a high-fidelity 3D visualization system. Our team has created a pre-visualization prototype that allows independent filmmakers to perform augmented virtuality by placing actors into a computer-generated 3D environment for the purposes of virtual production. After performing our preliminary usability research, we have determined a clear and effective 3D interface for film directors to use during the production process. The implication for this research sets the groundwork for building a pre-visualization system for on-set production that satisfies independent and emerging filmmakers.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133379839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CollaborativeConstraint: UI for collaborative 3D manipulation operations
Pub Date: 2016-03-19, DOI: 10.1109/3DUI.2016.7460076
Naëm Baron
Collaboration in virtual environments (VEs) is important as it offers a new perspective on interactions with and within these environments. We propose a 3D manipulation method designed for multi-user scenarios, taking advantage of the extended information available to all users. CollaborativeConstraint (ColCo) is a simple method for performing canonical 3D manipulation operations by means of a 3D user interface (UI). It focuses on collaborative tasks in virtual environments and is based on constraint definitions. Communication needs are reduced as much as possible by using an easy-to-understand synchronization mechanism and visual feedback. In this paper we present the ColCo concept in detail and demonstrate its application with a test setup.
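The constraint idea can be illustrated with a small sketch in which one collaborator defines an allowed translation axis and the other user's manipulation is filtered through it; names and structure are assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of a collaborative constraint: one collaborator defines
# a constraint (here an allowed translation axis) and the other user's
# manipulation is filtered through it, so little explicit communication is
# needed beyond seeing the constraint itself.

class AxisConstraint:
    def __init__(self, axis):
        self.axis = np.asarray(axis, dtype=float)
        self.axis /= np.linalg.norm(self.axis)

    def filter(self, delta):
        """Keep only the component of a translation that respects the constraint."""
        return np.dot(delta, self.axis) * self.axis

# User A constrains motion to the world X axis; user B drags the object freely,
# but only the X component of the drag is applied.
constraint = AxisConstraint([1.0, 0.0, 0.0])
object_pos = np.array([0.0, 1.0, 0.0])
object_pos += constraint.filter(np.array([0.30, 0.10, -0.05]))
print(object_pos)    # [0.3, 1.0, 0.0]
```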
{"title":"CollaborativeConstraint: UI for collaborative 3D manipulation operations","authors":"Naëm Baron","doi":"10.1109/3DUI.2016.7460076","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460076","url":null,"abstract":"Collaboration in virtual environments (VEs) is important as it offers a new perspective on interactions with and within these environments. We propose a 3D manipulation method designed for a multi-user scenario, taking advantage of the extended information available to all users. CollaborativeConstraint (ColCo) is a simple method to perform canonical 3D manipulation operations by mean of a 3D user interface (UI). It is focused on collaborative tasks in virtual environments based on constraints definition. The communication needs are reduced as much as possible by using easy to understand synchronization mechanism and visual feedbacks. In this paper we present the ColCo concept in detail and demonstrate its application with a test setup.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130445804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}