3D sketching on interactively unfolded vascular structures for treatment planning
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460073
P. Saalfeld, S. Glaßer, O. Beuing, Mandy Grundmann, B. Preim
In clinical practice, sketches support physicians in treatment planning. For example, they are employed as direct annotations on medical image data. However, this approach leads to occlusions in the case of spatially complex 3D representations of anatomical structures such as vascular systems. To overcome this limitation, we developed a framework that enables the physician to create annotations by freely sketching in a 3D environment. We solve the occlusion problem with an interactively unfolded vascular structure, animated between its original and unfolded representations. For this, we use a semi-immersive stereoscopic display and a stylus with ray-based interaction techniques.
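The core mechanic here is the animated transition between the folded (anatomical) and unfolded (occlusion-free) vessel representations. As a minimal sketch of that idea, assuming corresponding vertex arrays for both layouts and an easing curve chosen purely for illustration (the paper does not specify its animation), the per-frame blend could look like:

```python
import numpy as np

def unfold_blend(original, unfolded, t):
    """Interpolate vertex positions for unfolding parameter t in [0, 1].

    original, unfolded: (N, 3) arrays of corresponding vertex positions.
    t = 0 shows the anatomical shape, t = 1 the fully unfolded view.
    """
    t = np.clip(t, 0.0, 1.0)
    s = t * t * (3.0 - 2.0 * t)  # smoothstep easing (an assumed choice)
    return (1.0 - s) * original + s * unfolded
```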
{"title":"3D sketching on interactively unfolded vascular structures for treatment planning","authors":"P. Saalfeld, S. Glaßer, O. Beuing, Mandy Grundmann, B. Preim","doi":"10.1109/3DUI.2016.7460073","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460073","url":null,"abstract":"In clinical practice, sketches support physicians in treatment planning. For example, they are employed as direct annotations in medical image data. However, this approach leads to occlusions in case of spatially complex 3D representations of anatomical structures such as vascular systems. To overcome this limitation, we developed a framework which enables the physician to create annotations by freely sketching in 3D environment. We solve the problem of occlusions by an animated representation of the original and unfolded vascular structure with interactive unfolding. For this, we use a semi-immersive stereoscopic display and a stylus with ray-based interaction techniques.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134084766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Usability and cognitive benefits of a mobile tracked display in virtual laboratories for engineering education
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460074
E. Tanner, S. Savadatti, Benjamin Manning, K. Johnsen
We present the design of a novel educational virtual reality application to aid undergraduate engineering students in understanding the basic principles of fluid mechanics. In addition, we report results from a field study evaluating the usability and cognitive benefits of a mobile tracked display (MTD) for this application.
{"title":"Usability and cognitive benefits of a mobile tracked display in virtual laboratories for engineering education","authors":"E. Tanner, S. Savadatti, Benjamin Manning, K. Johnsen","doi":"10.1109/3DUI.2016.7460074","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460074","url":null,"abstract":"We present the design of a novel educational virtual reality application to aid undergraduate engineering students in understanding the basic principles of fluid mechanics. In addition, we report results from a field study evaluating the usability and cognitive benefits of a mobile tracked display (MTD) for this application.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124660757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing extreme 3D user interfaces for augmented live performances
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460048
K. Ponto, D. Lisowski, S. Fan
This paper presents a proof-of-concept system that enables integrated virtual and physical traversal of a space through natural locomotion, an automatic treadmill, and a stage-based flying system. The automatic treadmill enables the user to walk or run without manual intervention, while the flying system enables the user to control their height above the stage using a gesture-based control scheme. The system is showcased through a live performance event that demonstrates the ability to put the actor in active control of the performance. This approach enables a new performance methodology with exciting new options for theatrical storytelling, educational training, and interactive entertainment.
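The abstract does not describe how the treadmill follows the performer, so the sketch below is only one plausible reading of "without manual intervention": a proportional-derivative controller that keeps the tracked user near the belt center. The gains, limits, and tracking source are all assumptions, not the authors' implementation:

```python
def belt_speed(user_offset_m, user_velocity_ms, kp=1.5, kd=0.4, v_max=4.0):
    """Hypothetical PD belt controller; gains are illustrative.

    user_offset_m: signed distance of the user from the belt center (m),
                   positive toward the front of the belt.
    user_velocity_ms: user velocity relative to the room (m/s).
    Returns the commanded belt speed (m/s), clamped to [0, v_max].
    """
    v = kp * user_offset_m + kd * user_velocity_ms
    return max(0.0, min(v_max, v))
```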
{"title":"Designing extreme 3D user interfaces for augmented live performances","authors":"K. Ponto, D. Lisowski, S. Fan","doi":"10.1109/3DUI.2016.7460048","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460048","url":null,"abstract":"This paper presents a proof-of-concept system that enables the integrated virtual and physical traversal of a space through locomotion, automatic treadmill, and stage based flying system. The automatic treadmill enables the user to walk or run without manual intervention while the flying system enables the user to control their height above the stage using a gesture-based control scheme. The system is showcased through a live performance event that demonstrates the ability to put the actor in active control of the performance. This approach enables a new performance methodology with exciting new options for theatrical storytelling, educational training, and interactive entertainment.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115950437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Curvature manipulation techniques in redirection using haptic cues
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460038
Keigo Matsumoto, Yuki Ban, Takuji Narumi, T. Tanikawa, M. Hirose
This paper proposes a method for improving the effects of redirected walking (RDW) by using haptic cues, particularly for the discrimination of path curvature. Previous research has shown that, when presented with a straight path in the virtual world, users can be redirected along a circular arc with a radius of at least 22 m without detecting the inconsistency. However, such a radius is still too large for demonstrations in a restricted space. We developed an RDW system that displays a flat virtual wall which users walk straight along while, in reality, they walk along and touch a convex wall. Using this system, we conducted an experiment whose results show that our method reduced the perceived curvature in RDW to 62% of that of a visual-only method.
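For context, the standard curvature-gain mechanic behind the 22 m figure can be sketched as follows; this illustrates plain visual RDW, not the authors' haptic extension:

```python
import math

def curvature_yaw_delta(step_distance_m, radius_m):
    """Yaw (radians) injected for a walked distance on an arc of radius r.

    While the user walks a straight virtual path, rotating the scene by this
    amount each step bends the real-world path onto an arc of radius r.
    """
    return step_distance_m / radius_m

# At the 22 m detection-threshold radius from the literature, a 0.7 m step
# corresponds to roughly 1.8 degrees of injected scene rotation.
print(math.degrees(curvature_yaw_delta(0.7, 22.0)))  # ~1.82
```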
{"title":"Curvature manipulation techniques in redirection using haptic cues","authors":"Keigo Matsumoto, Yuki Ban, Takuji Narumi, T. Tanikawa, M. Hirose","doi":"10.1109/3DUI.2016.7460038","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460038","url":null,"abstract":"This paper proposes a method for improving the effects of redirected walking (RDW) by using haptic cues, particularly for the discrimination of path curvature. Some research has shown that users can be redirected on a circular arc with a radius of at least 22 m without being able to detect the inconsistency by showing a straight path in the virtual world. However, this is still too large to enable the presentation of a demonstration in a restricted space. We develop an RDW system, which displays a visual representation of a flat wall and users virtually walk straight along it although, in reality, users walk along a convex surface wall with touching it. Using this system, we conduct an experiment, and the results show that our method reduced the amount of perceived curvature in RDW down to 62% as compared to an only visual method.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116416062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scale matters! Analysis of dominant scale estimation in the presence of conflicting cues in multi-scale collaborative virtual environments
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460054
E. Langbehn, G. Bruder, Frank Steinicke
Multi-scale collaborative virtual environments (MCVEs) provide an important platform for many 3D application domains, as they allow several users to cooperate in a virtual environment (VE) at different scale levels, ranging from magnified detail views to minified overall views. However, in such MCVEs, the natural relations between a user's self-representation, i.e., her virtual body, and the environment in terms of size, scale, proportion, capabilities, or affordances are subject to change during the interaction. In this paper, we describe how the type of environment, the virtual self-representation of the user's body, and the presence of other avatars affect the estimation of dominant scale, i.e., the scale level relative to which we make spatial judgments, plan actions, and interpret other users' actions in MCVEs. We present a pilot study, which highlights the problem domain, and two psychophysical experiments, in which we analyzed how these factors affect the estimation of dominant scale and thus shape perception and action in MCVEs. Our results show an effect of the above-mentioned aspects on the estimation of dominant scale. In particular, they show interpersonal differences as well as a group effect: participants estimated the common scale level of a group of other avatars as the dominant scale, even if the participant's own scale or the environment scale deviated from the avatars' common scale. We discuss implications and guidelines for the development of MCVEs.
{"title":"Scale matters! Analysis of dominant scale estimation in the presence of conflicting cues in multi-scale collaborative virtual environments","authors":"E. Langbehn, G. Bruder, Frank Steinicke","doi":"10.1109/3DUI.2016.7460054","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460054","url":null,"abstract":"Multi-scale collaborative virtual environments (MCVEs) provide an important platform for many 3D application domains as they allow several users to cooperate in a virtual environment (VE) at different scale levels, ranging from magnified detail views to minified overall views. However, in such MCVEs, the natural relations between a user's self-representation, i. e., her virtual body, and the environment in terms of size, scale, proportion, capabilities, or affordances are subject to change during the interaction. In this paper we describe how the type of the environment, virtual self-representation of our body, as well as presence of other avatars affects our estimation of dominant scale, i. e., the scale level relative to which we make spatial judgments, plan actions and interpret other users' actions in MCVEs. We present a pilot study, which highlights the problem domain, and two psychophysical experiments, in which we analyzed how the different factors in MCVEs affect the estimation of dominant scale and thus shape perception and action in MCVEs. Our results show an effect of the above-mentioned aspects on the estimation of dominant scale. In particular, our results show interpersonal differences as well as a group effect, which reveals that participants estimated the common scale level of a group of other avatars as dominant scale, even if the participant's own scale or the environment scale deviated from the other avatars' common scale. We discuss implications and guidelines for the development of MCVEs.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134266247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting computational thinking through gamification
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460062
Joseph Isaac, Sabarish V. Babu
Video games have grown in popularity since the 1980s, with youth populations being their largest consumers. Previous research has demonstrated cognitive development and learning principles at work in video games, and as a result there is increasing interest in using games as teaching tools. Gamification is the use of video game elements in non-game applications. In this paper, I propose the design of a study that applies gamification to the programming environment VENVI in order to promote motivation, engagement, and computational thinking.
{"title":"Supporting computational thinking through gamification","authors":"Joseph Isaac, Sabarish V. Babu","doi":"10.1109/3DUI.2016.7460062","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460062","url":null,"abstract":"Video games have grown in popularity since the 1980's. The largest consumers of video games are youth populations. Previous research has shown cognitive development and learning principles in video games. As a result, there is an increasing interest in games being teaching tools. Gamification is the use of video game elements in non-game applications. In this paper, I proposed a design to a study of applying gamification to a computer programming software, VENVI, in order to promote motivation, engagement, and computational thinking.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131206899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the effects of image persistence on dynamic target acquisition in low frame rate virtual environments
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460043
David J. Zielinski, M. Sommer, H. Rao, L. G. Appelbaum, Nicholas D. Potter, Regis Kopper
User performance in virtual environments with visual conditions degraded by low frame rates is an interesting area of inquiry. Visual content shown in a low frame rate simulation has the quality of the original image but persists for an extended period until the next frame is displayed (so-called high persistence, HP). An alternative, called low persistence (LP), involves displaying the rendered frame for a single display refresh and blanking the screen while waiting for the next frame to be generated. Previous research has evaluated the usefulness of the LP technique in low frame rate simulations during a static target acquisition task. To gain greater knowledge about the LP technique, we conducted a user study to evaluate user performance and learning during a dynamic target acquisition task. The acquisition task was evaluated under a high frame rate (60 fps) condition, a traditional low frame rate HP condition (10 fps), and the experimental low frame rate LP technique. The task involved the acquisition of targets moving along several different trajectories, modeled after a shotgun trap shooting task. The results of our study indicate that the LP condition approaches high frame rate performance within certain classes of target trajectories. Interestingly, we also see that learning is consistent across conditions, indicating that it may not always be necessary to train under a visually high frame rate system to learn a particular task. We discuss the implications of using the LP technique to mitigate low frame rate issues, as well as its potential usefulness for training in low frame rate virtual environments.
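The HP/LP distinction is easy to make concrete. Below is a schematic sketch for the paper's 60 Hz display and 10 fps render rate; the timing constants follow the abstract, but the loop structure itself is illustrative, not the authors' implementation:

```python
DISPLAY_HZ = 60
RENDER_FPS = 10
REFRESHES_PER_FRAME = DISPLAY_HZ // RENDER_FPS   # 6 refreshes per new image

def refresh_sequence(mode, n_refreshes=12):
    """Return what the display shows on each refresh: 'frame' or 'black'."""
    out = []
    for i in range(n_refreshes):
        if i % REFRESHES_PER_FRAME == 0:
            out.append("frame")                  # a newly rendered image
        else:
            # HP holds the last image; LP blanks until the next image.
            out.append("frame" if mode == "HP" else "black")
    return out

print(refresh_sequence("HP"))  # image persists across all 6 refreshes
print(refresh_sequence("LP"))  # image shown once, then 5 blanked refreshes
```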
{"title":"Evaluating the effects of image persistence on dynamic target acquisition in low frame rate virtual environments","authors":"David J. Zielinski, M. Sommer, H. Rao, L. G. Appelbaum, Nicholas D. Potter, Regis Kopper","doi":"10.1109/3DUI.2016.7460043","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460043","url":null,"abstract":"User performance in virtual environments with degraded visual conditions due to low frame rates is an interesting area of inquiry. Visual content shown in a low frame rate simulation has the quality of the original image, but persists for an extended period until the next frame is displayed (so-called high persistence-HP). An alternative, called low persistence (LP), involves displaying the rendered frame for a single display frame and blanking the screen while waiting for the next frame to be generated. Previous research has evaluated the usefulness of the LP technique in low frame rate simulations during a static target acquisition task. To gain greater knowledge about the LP technique, we have conducted a user study to evaluate user performance and learning during a dynamic target acquisition task. The acquisition task was evaluated under a high frame rate, (60 fps) condition, a traditional low frame rate HP condition (10 fps), and the experimental low frame rate LP technique. The task involved the acquisition of targets moving along several different trajectories, modeled after a shotgun trap shooting task. The results of our study indicate the LP condition approaches high frame rate performance within certain classes of target trajectories. Interestingly we also see that learning is consistent across conditions, indicating that it may not always be necessary to train under a visually high frame rate system to learn a particular task. We discuss implications of using the LP technique to mitigate low frame rate issues as well as its potential usefulness for training in low frame rate virtual environments.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115650323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indirect touch manipulation for interaction with stereoscopic displays
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460025
A. Simeone
Research on 3D interaction has explored the application of multi-touch technologies to 3D stereoscopic displays. However, the ability to perceive 3D objects at different depths (in front of or behind the screen surface) conflicts with the necessity of expressing input on the screen surface. Touching the screen increases the risk of the vergence-accommodation conflict, which can lead to the loss of the stereoscopic effect or cause discomfort. In this work, we present two studies evaluating a novel approach based on the concept of indirect touch interaction via an external multi-touch device. We compare indirect touch techniques to two state-of-the-art 3D interaction techniques: DS3 and the Triangle Cursor. The first study offers a quantitative and qualitative comparison of direct and indirect interaction on a 4 DOF docking task. The second presents a follow-up experiment focusing on a 6 DOF docking task. Results show that indirect touch provides a more comfortable viewing experience than both direct techniques, and that switching to indirect touch incurs no performance drawbacks, as net manipulation times are comparable.
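The core of indirect touch is that the fingers act on an external surface while the stereoscopic display stays untouched. As an illustration of the mapping step only, and not of the DS3 or Triangle Cursor techniques evaluated in the paper, a normalized touch point might be projected onto a 3D work plane like this:

```python
import numpy as np

def touch_to_plane(u, v, plane_center, plane_x, plane_y, size=(0.4, 0.3)):
    """Map a normalized touch point (u, v) in [0, 1]^2 onto a 3D work plane.

    plane_center: 3D point at the center of the mapped region.
    plane_x, plane_y: unit vectors spanning the plane.
    size: physical extent of the mapped region in meters (assumed values).
    """
    return (np.asarray(plane_center, dtype=float)
            + (u - 0.5) * size[0] * np.asarray(plane_x, dtype=float)
            + (v - 0.5) * size[1] * np.asarray(plane_y, dtype=float))

# Example: a touch at (0.75, 0.5) lands 0.1 m to the right of plane center.
p = touch_to_plane(0.75, 0.5, [0, 0, -1], [1, 0, 0], [0, 1, 0])
```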
{"title":"Indirect touch manipulation for interaction with stereoscopic displays","authors":"A. Simeone","doi":"10.1109/3DUI.2016.7460025","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460025","url":null,"abstract":"Research on 3D interaction has explored the application of multi-touch technologies to 3D stereoscopic displays. However, the ability to perceive 3D objects at different depths (in front or behind the screen surface) conflicts with the necessity of expressing inputs on the screen surface. Touching the screen increases the risk of causing the vergence-accommodation conflict which can lead to the loss of the stereoscopic effect or cause discomfort. In this work, we present two studies evaluating a novel approach based on the concept of indirect touch interaction via an external multi-touch device. We compare indirect touch techniques to two state-of-the-art 3D interaction techniques: DS3 and the Triangle Cursor. The first study offers a quantitative and qualitative study of direct and indirect interaction on a 4 DOF docking task. The second presents a follow-up experiment focusing on a 6 DOF docking task. Results show that indirect touch interaction techniques provide a more comfortable viewing experience than both techniques. It also shows that there are no drawbacks when switching to indirect touch, as their performances in terms of net manipulation times are comparable.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"110 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129075309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proactive haptic articulation for intercommunication in collaborative virtual environments
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460036
Victor Adriel de Jesus Oliveira, L. Nedel, Anderson Maciel
In this paper, we draw on elements of speech articulation to introduce proactive haptic articulation as a novel approach to communication in Collaborative Virtual Environments. We defend the hypothesis that elements present in natural language, when incorporated into the design of a vibrotactile vocabulary, provide an expressive medium for intercommunication. Moreover, the ability to render tactile cues to a teammate should encourage users to extrapolate a given vocabulary while using it. We implemented a collaborative puzzle task to observe the use of such a vocabulary. Results show that participants autonomously adapted it to meet their communication needs during the assembly.
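To make the notion of a vibrotactile vocabulary concrete, a minimal encoding might represent each "word" as a sequence of pulses, loosely analogous to articulated speech segments. The entries below are invented for illustration and are not the paper's actual tactons:

```python
# Each vocabulary word is a list of (actuator_id, intensity, duration_s)
# pulses; the actuator layout and timings are hypothetical.
VOCABULARY = {
    "move_left":  [(0, 0.8, 0.15)],
    "move_right": [(1, 0.8, 0.15)],
    "confirm":    [(0, 0.5, 0.08), (1, 0.5, 0.08)],  # short double pulse
}

def render_tacton(word, play_pulse):
    """Send each pulse of a vocabulary word to a hardware callback."""
    for actuator, intensity, duration in VOCABULARY[word]:
        play_pulse(actuator, intensity, duration)
```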
{"title":"Proactive haptic articulation for intercommunication in collaborative virtual environments","authors":"Victor Adriel de Jesus Oliveira, L. Nedel, Anderson Maciel","doi":"10.1109/3DUI.2016.7460036","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460036","url":null,"abstract":"In this paper, we look upon elements present in speech articulation to introduce proactive haptic articulation as a novel approach for communication in Collaborative Virtual Environments. We defend the hypothesis that elements present in natural language, when added to the design of the vibrotactile vocabulary, should provide an expressive medium for intercommunication. Moreover, the ability to render tactile cues to a teammate should encourage users to extrapolate a given vocabulary while using it. We implemented a collaborative puzzle task to observe the use of such vocabulary. Results show that participants autonomously adapted it to attend their communication needs during the assembly.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129152266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Full-body tracking using a sensor array system and laser-based sweeps
Pub Date: 2016-03-19 | DOI: 10.1109/3DUI.2016.7460034
S. Islam, B. Ionescu, C. Gadea, D. Ionescu
The increased availability of consumer-grade virtual reality (VR) head-mounted displays (HMDs) has created significant demand for affordable and reliable 3D input devices that can be used to control 3D user interfaces. Accurate positioning of a user's body within the virtual environment is essential in order to provide users with convincing and interactive VR experiences. Existing full-body motion tracking systems from academia and industry have suffered from problems of occlusion and accumulated sensor error, while often lacking absolute positional tracking. This paper describes a wireless Sensor Array System that uses multiple inertial measurement units (IMUs) to calculate the complete pose of a user's body. The system corrects gyroscope errors by using magnetic sensor data. The Sensor Array System is augmented by a positional tracking system that consists of a rotary-laser base station and a photodiode-based tracked object worn on the user's torso. The base station emits horizontal and vertical laser lines that sweep across the environment in sequence. With the known configuration of the photodiode constellation, the position and orientation of the tracked object can be determined with high accuracy, low latency, and low computational overhead. As will be shown, the sensor fusion algorithms result in a full-body tracking system that can be applied to a wide variety of 3D applications and interfaces.
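The timing-to-angle step behind rotary-laser tracking of this kind (as in Lighthouse-style base stations) can be sketched as follows; the 60 Hz sweep period is an assumption for illustration, not a figure from the paper:

```python
import math

SWEEP_PERIOD_S = 1.0 / 60.0          # assumed duration of one full rotation

def hit_time_to_angle(t_hit_s, t_sync_s):
    """Angle (radians) of a photodiode from the sweep's zero direction.

    The elapsed time between the sweep's sync instant and the laser line
    hitting the diode is a fixed fraction of one rotation, hence an angle.
    """
    return 2.0 * math.pi * (t_hit_s - t_sync_s) / SWEEP_PERIOD_S

# With horizontal and vertical sweep angles for several diodes in a known
# constellation, the tracked object's pose can then be solved as a
# perspective-n-point problem.
```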
{"title":"Full-body tracking using a sensor array system and laser-based sweeps","authors":"S. Islam, B. Ionescu, C. Gadea, D. Ionescu","doi":"10.1109/3DUI.2016.7460034","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460034","url":null,"abstract":"The increased availability of consumer-grade virtual reality (VR) head-mounted displays (HMD) has created significant demand for affordable and reliable 3D input devices that can be used to control 3D user interfaces. Accurate positioning of a user's body within the virtual environment is essential in order to provide users with convincing and interactive VR experiences. Existing full-body motion tracking systems from academia and industry have suffered from problems of occlusion and accumulated sensor error while often lacking absolute positional tracking. This paper describes a wireless Sensor Array System that uses multiple inertial measurement units (IMUs) for calculating the complete pose of a user's body. The system corrects gyroscope errors by using magnetic sensor data. The Sensor Array System is augmented by a positional tracking system that consists of a rotary-laser base station and a photodiode-based tracked object worn on the user's torso. The base station emits horizontal and vertical laser lines that sweep across the environment in sequence. With the known configuration of the photodiode constellation, the position and orientation of the tracked object can be determined with high accuracy, low latency, and low computational overhead. As will be shown, the sensor fusion algorithms used result with a full-body tracking system that can be applied to a wide variety of 3D applications and interfaces.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128045446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}