Haptic Prop: A Tangible Prop for Semi-passive Haptic Interaction
Dimitar Valkov, Andreas Mantler, L. Linsen
In this paper, we present Haptic Prop, a semi-passive, pico-powered, tangible prop that provides programmable friction for interaction in tabletop setups such as interactive workbenches or fish-tank VR. We explore the interaction space, its basic components, and its constraints. Haptic Prop can provide haptic feedback to the user at different levels and in different directions. We have conducted a preliminary user study evaluating users' acceptance of the device and their ability to detect the programmed level of friction for rotational and linear movements. While preliminary, the results demonstrate the utility of our device and outline promising directions for future work.
{"title":"Haptic Prop: A Tangible Prop for Semi-passive Haptic Interaction","authors":"Dimitar Valkov, Andreas Mantler, L. Linsen","doi":"10.1109/VR.2019.8797718","DOIUrl":"https://doi.org/10.1109/VR.2019.8797718","url":null,"abstract":"In this paper, we present Haptic Prop, a semi-passive, pico-powered, tangible prop, which is able to provide programmable friction for interaction with a tabletop setup, such as interactive workbenches or fish-tank VR. We explore the interaction space, its basic components, and constraints. Haptic Prop can be used to provide haptic feedback to the user at different levels and in different directions. We have conducted a preliminary user study evaluating the users' acceptance for the device and their ability to detect the programmed level of friction for rotation and linear movements. While currently still preliminary, the results demonstrate the utility of our device and outline some promising directions for future work.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122252450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep360Up: A Deep Learning-Based Approach for Automatic VR Image Upright Adjustment
Raehyuk Jung, Aiden Seung Joon Lee, Amirsaman Ashtari, J. Bazin
Spherical VR cameras can capture high-quality immersive VR images with a 360° field of view. In practice, however, when the camera orientation is not upright, the acquired VR image appears tilted when displayed on a VR headset, which diminishes the quality of the VR experience. To overcome this problem, we present a deep learning-based approach that automatically estimates the orientation of a VR image and returns its upright version. In contrast to existing methods, our approach does not require the presence of lines or a horizon in the image and can therefore be applied to a wide range of scenes. Extensive experiments and comparisons with state-of-the-art methods confirm the validity of our approach.
{"title":"Deep360Up: A Deep Learning-Based Approach for Automatic VR Image Upright Adjustment","authors":"Raehyuk Jung, Aiden Seung Joon Lee, Amirsaman Ashtari, J. Bazin","doi":"10.1109/VR.2019.8798326","DOIUrl":"https://doi.org/10.1109/VR.2019.8798326","url":null,"abstract":"Spherical VR cameras can capture high-quality immersive VR images with a 360° field of view. However, in practice, when the camera orientation is not straight, the acquired VR image appears tilted when displayed on a VR headset, which diminishes the quality of the VR experience. To overcome this problem, we present a deep learning-based approach that can automatically estimate the orientation of a VR image and return its upright version. In contrast to existing methods, our approach does not require the presence of lines or horizon in the image, and thus can be applied on a wide range of scenes. Extensive experiments and comparisons with state-of-the-art methods have successfully confirmed the validity of our approach.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128188044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A 6-DOF Telexistence Drone Controlled by a Head Mounted Display
Xingyu Xia, Chi-Man Pun, Di Zhang, Yang Yang, Huimin Lu, Hao Gao, Feng Xu
Recently, a new form of telexistence has been achieved by recording images with cameras on an unmanned aerial vehicle (UAV) and displaying them to the user via a head-mounted display (HMD). A key problem is how to provide a free and natural mechanism for the user to control the viewpoint and observe the scene. To this end, we propose an improved rate-control method with an adaptive origin update (AOU) scheme. Without the aid of any auxiliary equipment, our scheme handles the self-centering problem. In addition, we present a full 6-DOF viewpoint control method to manipulate the motion of a stereo camera, and we build a real prototype that realizes it with a pan-tilt-zoom (PTZ) unit, which not only provides 2-DOF to the camera but also compensates for the jittering motion of the UAV to record more stable image streams.
{"title":"A 6-DOF Telexistence Drone Controlled by a Head Mounted Display","authors":"Xingyu Xia, Chi-Man Pun, Di Zhang, Yang Yang, Huimin Lu, Hao Gao, Feng Xu","doi":"10.1109/VR.2019.8797791","DOIUrl":"https://doi.org/10.1109/VR.2019.8797791","url":null,"abstract":"Recently, a new form of telexistence is achieved by recording images with cameras on an unmanned aerial vehicle (UAV) and displaying them to the user via a head mounted display (HMD). A key problem here is how to provide a free and natural mechanism for the user to control the viewpoint and watch a scene. To this end, we propose an improved rate-control method with an adaptive origin update (AOU) scheme. Without the aid of any auxiliary equipment, our scheme handles the self-centering problem. In addition, we present a full 6-DOF viewpoint control method to manipulate the motion of a stereo camera, and we build a real prototype to realize this by utilizing a pan-tilt-zoom (PTZ) which not only provides 2-DOF to the camera but also compensates the jittering motion of the UAV to record more stable image streams.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121734983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creating a Stressful Decision Making Environment for Aerial Firefighter Training in Virtual Reality
Rory M. S. Clifford, Sungchul Jung, Simon Hoermann, M. Billinghurst, R. Lindeman
The decisions made by Air Attack Supervisor (AAS) helicopter co-pilots in aerial firefighting have critical and immediate impacts. It is difficult to consistently make fast, high-quality decisions under the mental and physical stress being experienced. Real-world training exercises have limitations in terms of safety, cost, time, and the difficulty of reproducing events, which makes frequent training infeasible. Virtual Reality (VR) offers new training opportunities, but it is challenging to create a virtual environment with a level of stress analogous to that experienced in the real world. In this paper, we investigate the use of a multi-user, collaborative, multi-sensory (visual, audio, tactile) VR system to produce a realistic training environment for practising aerial firefighting scenarios. We focus on a comparison between our VR training system, an equivalent real-world field training exercise, and an existing radio-only exercise currently in use, comparing Heart-Rate Variability (HRV) and self-reported stress using the Short Stress State Questionnaire (SSSQ). We conducted the study with real trainee AAS firefighters to determine the effectiveness of the system. Our results show no significant differences between the VR training exercise and the real-world exercise in the level of stress measured by HRV, and no significant difference between the VR and radio-only exercises as reported by the SSSQ.
{"title":"Creating a Stressful Decision Making Environment for Aerial Firefighter Training in Virtual Reality","authors":"Rory M. S. Clifford, Sungchul Jung, Simon Hoerrnann, M. Billinghurst, R. Lindeman","doi":"10.1109/VR.2019.8797889","DOIUrl":"https://doi.org/10.1109/VR.2019.8797889","url":null,"abstract":"The decisions made by an Air Attack Supervisor (AAS) helicopter co-pilots in aerial firefighting have critical and immediate impacts. It is difficult to always make fast, high quality decisions due to the mental and physical stress being experienced. Real world training exercises have limitations such as safety, cost, time and difficulty in reproducing events, making frequent training infeasible. Virtual Reality (VR) offers new training opportunities, but it is challenging to create a virtual environment with the analogous level of stress experienced in the real-world. In this paper, we investigate the use of a multi-user, collaborative, multi-sensory (vision, audio, tactile) VR system to produce a realistic training environment for practising aerial firefighting training scenarios. We focus on a comparison between our VR training system, an equivalent real-world field training and an existing radio-only exercise currently in use, where we compare Heart-Rate Variability (HRV) and self reported stress using the Short Stress State Questionnaire (SSSQ). We conducted the study with real trainee AAS firefighters to determine the effectiveness of the system. Our results show that there were no significant differences between the VR training exercise and the real-world exercise in terms of the level of stress, measured by HRV, and no significant difference between VR and radio-only exercises, as reported by the SSSQ.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115799645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empowering Young Job Seekers with Virtual Reality
E. Prasolova-Førland, Mikhail Fominykh, Oscar Ihlen Ekelund
This paper presents the results of the Virtual Internship project, which aims to help young job seekers gain insights into different workplaces via immersive and interactive experiences. We designed the concept of an 'Immersive Job Taste' that provides a rich presentation of occupations with elements of workplace training, targeting a specific group of young job seekers, including high-school students and unemployed youth. We developed several scenarios and applied different virtual and augmented reality concepts to build prototypes for different types of devices. The intermediate and final versions of the prototypes were evaluated by several groups of primary users and experts, including over 70 young job seekers and high-school students and over 45 professionals and experts. The data were collected using questionnaires and interviews. The results indicate a generally very positive attitude towards the concept of an Immersive Job Taste, although with significant differences between job seekers and experts. The prototype developed for room-scale virtual reality with controllers was generally evaluated better than the prototypes based on cardboard viewers with 360° videos or on animated 3D graphics with augmented reality glasses. In the paper, we discuss several aspects, such as the potential of immersive technologies for career guidance, fighting youth unemployment by better informing young job seekers, and various practical and technological considerations.
{"title":"Empowering Young Job Seekers with Virtual Reality","authors":"E. Prasolova-Førland, Mikhail Fominykh, Oscar Ihlen Ekelund","doi":"10.1109/VR.2019.8798179","DOIUrl":"https://doi.org/10.1109/VR.2019.8798179","url":null,"abstract":"This paper presents the results of the Virtual Internship project that aims to help young job seekers get insights of different workplaces via immersive and interactive experiences. We designed a concept of ‘Immersive Job Taste’ that provides a rich presentation of occupations with elements of workplace training, targeting a specific group of young job seekers, including high-school students and unemployed. We developed several scenarios and applied different virtual and augmented reality concepts to build prototypes for different types of devices. The intermediary and the final versions of the prototypes were evaluated by several groups of primary users and experts, including over 70 young job seekers and high school students and over 45 various professionals and experts. The data were collected using questionnaires and interviews. The results indicate a generally very positive attitude towards the concept of immersive job taste, although with significant differences between job seekers and experts. The prototype developed for room-scale virtual reality with controllers was generally evaluated better than those including cardboard with 360 videos or with animated 3D graphics and augmented reality glasses. In the paper, we discuss several aspects, such as the potential of immersive technologies for career guidance, fighting youth unemployment by better informing the young job seekers, and various practical and technology considerations.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134327729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating Spherical Fish Tank Virtual Reality Displays for Establishing Realistic Eye-Contact
Georg Hagemann, Qian Zhou, I. Stavness, S. Fels
Eye-contact is a key aspect of non-verbal human communication in everyday tasks [1]. It provides important social and emotional information that can increase the effectiveness of human communication [8]. In a conversation, eye-contact, or the lack thereof, is constantly evaluated by human brains. Conversation partners derive subjective judgments of others' credibility, focus, and confidence [4] from it. Seeking another person's eye-contact signals to them that one's focus is on that person and that the main receptive senses are prepared to receive their input. Likewise, breaking eye-contact usually indicates distraction, loss of confidence, loss of interest, or a shift of focus to a different target.
{"title":"Investigating Spherical Fish Tank Virtual Reality Displays for Establishing Realistic Eye-Contact","authors":"Georg Hagemann, Qian Zhou, I. Stavness, S. Fels","doi":"10.1109/VR.2019.8797905","DOIUrl":"https://doi.org/10.1109/VR.2019.8797905","url":null,"abstract":"Eye-contact is a key aspect of non-verbal human communication in everyday tasks [1]. It provides important social and emotional information that can increase the effectiveness of human communication [8]. In a conversation, eye-contact, or the lack thereof, is constantly evaluated by human brains. Conversation partners derive subjective judgment of others' credibility, focus and confidence [4] from it. Seeking another person's eye-contact is a signal for them that focus is put on that person and the main receptive senses are prepared to receive input from the other person. Likewise breaking eye-contact usually indicates distraction, loss of confidence, loss of interest or shifting focus to a different target.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133448257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of Sensory Conflict and Postural Instability on Cybersickness
A. K. T. Ng, L. Chan, H. Lau
Sensory conflict theory and postural instability theory have often been tested individually to explain cybersickness in VR systems, but they have seldom been systematically compared. An earlier study evaluated them on a large screen using 2D videos. This study evaluated the effects of sensory conflict and postural instability on discomfort in VR. Virtual visual locomotion was shown on a head-mounted display. A motion platform vibrated at low frequency while the participant stood on top of it. Each factor was manipulated alone or in combination. Results showed that the visual-motion-only condition led to the highest misery score, higher than the physical-vibration-only condition. This suggests that, consistent with previous literature, sensory conflict may be a major contributing factor in cybersickness.
{"title":"Effect of Sensory Conflict and Postural Instability on Cybersickness","authors":"A. K. T. Ng, L. Chan, H. Lau","doi":"10.1109/VR.2019.8797781","DOIUrl":"https://doi.org/10.1109/VR.2019.8797781","url":null,"abstract":"Sensory conflict theory and postural instability theory were often tested individually to explain cybersickness in VR systems, but they were seldom systematically compared. An earlier study evaluated them on a large screen using 2D videos. This study evaluated sensory conflict and postural instability on the discomfort in VR. Virtual visual locomotion were shown on an head-mounted display. A motion platform vibrated in low-frequency while the participant stood on top. Each factor was manipulated alone or in combination. Results showed that the visual motion only condition led to the highest miserable score, higher than the physical vibration only condition. This suggested that consistent with previous literature, sensory conflict may be a major contributing factor of cybersickness.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128958738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Continuous Level of Detail Rendering of Point Clouds
Markus Schütz, Katharina Krösl, M. Wimmer
Real-time rendering of large point clouds requires acceleration structures that reduce the number of points drawn on screen. State-of-the-art algorithms group and render points in hierarchically organized chunks with varying extent and density, which results in sudden changes of density from one level of detail to another, as well as noticeable popping artifacts when additional chunks are blended in or out. These popping artifacts are especially noticeable at lower levels of detail, and consequently in virtual reality, where high performance requirements impose a reduction in detail. We propose a continuous level-of-detail method that exhibits gradual rather than sudden changes in density. Our method continuously recreates a down-sampled vertex buffer from the full point cloud, based on camera orientation, position, and distance to the camera, in a point-wise rather than chunk-wise fashion and at speeds of up to 17 million points per millisecond. As a result, additional details are blended in and out in a less noticeable and significantly less irritating manner compared to the state of the art. The improved acceptance of our method was confirmed in a user study.
{"title":"Real-Time Continuous Level of Detail Rendering of Point Clouds","authors":"Markus Schütz, Katharina Krösl, M. Wimmer","doi":"10.1109/VR.2019.8798284","DOIUrl":"https://doi.org/10.1109/VR.2019.8798284","url":null,"abstract":"Real-time rendering of large point clouds requires acceleration structures that reduce the number of points drawn on screen. State-of-the art algorithms group and render points in hierarchically organized chunks with varying extent and density, which results in sudden changes of density from one level of detail to another, as well as noticeable popping artifacts when additional chunks are blended in or out. These popping artifacts are especially noticeable at lower levels of detail, and consequently in virtual reality, where high performance requirements impose a reduction in detail. We propose a continuous level-of-detail method that exhibits gradual rather than sudden changes in density. Our method continuously recreates a down-sampled vertex buffer from the full point cloud, based on camera orientation, position, and distance to the camera, in a point-wise rather than chunk-wise fashion and at speeds up to 17 million points per millisecond. As a result, additional details are blended in or out in a less noticeable and significantly less irritating manner as compared to the state of the art. The improved acceptance of our method was successfully evaluated in a user study.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132948530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VirtualTablet: Extending Movable Surfaces with Touch Interaction
A. Hoppe, Felix Marek, F. V. D. Camp, Rainer Stiefelhagen
Immersive output and effortless input are two core aspects of a virtual reality (VR) experience. We transfer ubiquitous touch interaction with haptic feedback into a virtual environment (VE). The movable, inexpensive real-world object provides touch detection with an accuracy equal to that of ray-casting interaction with a controller. Moreover, the virtual tablet extends the functionality of a real-world tablet: additional information is displayed in mid-air around the touchable area, and the tablet can be turned over to interact with both sides. It allows easy-to-learn and precise system interaction and can even augment the established touch metaphor with new paradigms.
{"title":"VirtualTablet: Extending Movable Surfaces with Touch Interaction","authors":"A. Hoppe, Felix Marek, F. V. D. Camp, Rainer Stietelhaqen","doi":"10.1109/VR.2019.8797993","DOIUrl":"https://doi.org/10.1109/VR.2019.8797993","url":null,"abstract":"Immersive output and effortless input are two core aspects of a virtual reality (VR) experience. We transfer ubiquitous touch interaction with haptic feedback into a virtual environment (VE). The movable and cheap real world object supplies an accurate touch detection equal to a ray-casting interaction with a controller. Moreover, the virtual tablet extends the functionality of a real world tablet. Additional information is displayed in mid-air around the touchable area and the tablet can be turned over to interact with both sides. It allows easy to learn and precise system interaction and can even augment the established touch metaphor with new paradigms.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125492884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MagicHand: Interact with IoT Devices in Augmented Reality Environment
Yongbin Sun, Alexandre Armengol-Urpi, S. N. Kantareddy, J. Siegel, S. Sarma
We present an Augmented Reality (AR) visualization and interaction tool that lets users control Internet of Things (IoT) devices with hand gestures. Today, smart IoT devices are becoming increasingly ubiquitous, with diverse forms and functions, yet user control over them is still largely limited to mobile devices and web interfaces. Recently, AR has developed rapidly and provides immersive solutions that enhance the user experience of applications in many fields. Its capability to create immersive interactions allows AR to improve the way smart devices are controlled through more direct visual feedback. In this paper, we create a functional prototype of one such system, enabling seamless interaction with sound and lighting systems through augmented hand-controlled interaction panels. To interpret users' intentions, we implement a standard 2D convolutional neural network (CNN) for real-time hand gesture recognition and deploy it within our system. Our prototype is also equipped with a simple but effective object detector that can identify target devices within an appropriate range by analyzing geometric features. We evaluate the performance of our system qualitatively and quantitatively and demonstrate it on two smart devices.
{"title":"MagicHand: Interact with IoT Devices in Augmented Reality Environment","authors":"Yongbin Sun, Alexandre Armengol-Urpi, S. N. Kantareddy, J. Siegel, S. Sarma","doi":"10.1109/VR.2019.8798053","DOIUrl":"https://doi.org/10.1109/VR.2019.8798053","url":null,"abstract":"We present an Augmented Reality (AR) visualization and interaction tool for users to control Internet of Things (IoT) devices with hand gestures. Today, smart IoT devices are becoming increasingly ubiquitous with diverse forms and functions, yet most user controls over them are still limited to mobile devices and web interfaces. Recently, AR has been developed rapidly, and provided immersive solutions to enhance user experience of applications in many fields. Its capability to create immersive interactions allows AR to improve the way smart devices are controlled via more direct visual feedback. In this paper, we create a functional prototype of one such system, enabling seamless interactions with sound and lighting systems through the use of augmented hand-controlled interaction panels. To interpret users' intentions, we implement a standard 2D convolution neural network (CNN) for real-time hand gesture recognition and deploy it within our system. Our prototype is also equipped with a simple but effective object detector which can identify target devices within a proper range by analyzing geometric features. We evaluate the performance of our system qualitatively and quantitatively and demonstrate it on two smart devices.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121360382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}