Match the Cube: Investigation of the Head-coupled Input with a Spherical Fish Tank Virtual Reality Display
Qian Zhou, Fan Wu, I. Stavness, S. Fels. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8798362

Fish Tank Virtual Reality (FTVR) displays create a compelling 3D effect through the motion parallax cue produced by head-coupled perspective. While head-coupled viewpoint control provides natural visuomotor coupling, the motion parallax cue has been found to be underutilized, with minimal head motion observed once manual input becomes available to users. We investigate whether users can effectively use head-coupling in conjunction with manual input in a mental rotation task involving inspection and comparison of a pair of 3D cubes. We found that participants incorporated head-coupled viewpoint control alongside manual touch input in the task: they used touch as the primary input and the head as the secondary input, with an input ratio of 4.2:1. The combined input approach appears to be largely sequential, with the head and manual input co-activated for only 8.63% of the task duration. The results of this study provide insights for designing head-coupled interactions in many 3D interactive applications.
Reconciling Being in-Control vs. Being Helped for the Execution of Complex Movements in VR
Thibault Porssut, B. Herbelin, R. Boulic. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8797716

Performing motor tasks in virtual environments is best achieved with motion capture and animation of a 3D character that participants control in real time and perceive as their avatar in the virtual environment. A strong Sense of Embodiment (SoE) for the virtual body relies not only on the feeling that the virtual body is their own (body ownership), but also on the feeling that it moves in the world according to their will and precisely replicates their body movement (sense of agency). Within that frame of mind, our specific aim is to demonstrate that the avatar can even be programmed to be better at executing a given task, or to perform a movement that is normally difficult or impossible for the user to execute precisely. More specifically, our experimental task consists in asking subjects to follow with the hand a target that is animated using non-biological motion; the unnatural nature of the movement leads to systematic errors by the subjects. The challenge is to introduce a subtle distortion between the position of the real hand and the position of the virtual hand, so that the virtual hand succeeds in performing the task while still letting subjects believe they are fully in control. Results of two experiments (N = 16) show that our implementation of a distortion function, which we name the attraction well, successfully led participants to report being in control of the movement (agency) and being embodied in the avatar (body ownership), even when the distortion was above a threshold they could detect. Furthermore, a progressive introduction of the distortion (starting without help and introducing the distortion on the go) could increase its acceptance even further.
Haptic Prop: A Tangible Prop for Semi-passive Haptic Interaction
Dimitar Valkov, Andreas Mantler, L. Linsen. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8797718

In this paper, we present Haptic Prop, a semi-passive, pico-powered, tangible prop that provides programmable friction for interaction with a tabletop setup, such as interactive workbenches or fish-tank VR. We explore the interaction space, its basic components, and its constraints. Haptic Prop can provide haptic feedback to the user at different levels and in different directions. We have conducted a preliminary user study evaluating users' acceptance of the device and their ability to detect the programmed level of friction for rotational and linear movements. While still preliminary, the results demonstrate the utility of our device and outline promising directions for future work.
Deep360Up: A Deep Learning-Based Approach for Automatic VR Image Upright Adjustment
Raehyuk Jung, Aiden Seung Joon Lee, Amirsaman Ashtari, J. Bazin. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8798326

Spherical VR cameras can capture high-quality immersive VR images with a 360° field of view. In practice, however, when the camera orientation is not upright, the acquired VR image appears tilted when displayed on a VR headset, which diminishes the quality of the VR experience. To overcome this problem, we present a deep learning-based approach that automatically estimates the orientation of a VR image and returns its upright version. In contrast to existing methods, our approach does not require the presence of lines or a horizon in the image and can therefore be applied to a wide range of scenes. Extensive experiments and comparisons with state-of-the-art methods confirm the validity of our approach.
A 6-DOF Telexistence Drone Controlled by a Head Mounted Display
Xingyu Xia, Chi-Man Pun, Di Zhang, Yang Yang, Huimin Lu, Hao Gao, Feng Xu. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8797791

Recently, a new form of telexistence has been achieved by recording images with cameras on an unmanned aerial vehicle (UAV) and displaying them to the user via a head mounted display (HMD). A key problem here is how to provide a free and natural mechanism for the user to control the viewpoint and observe a scene. To this end, we propose an improved rate-control method with an adaptive origin update (AOU) scheme. Without the aid of any auxiliary equipment, our scheme handles the self-centering problem. In addition, we present a full 6-DOF viewpoint control method to manipulate the motion of a stereo camera, and we build a real prototype that realizes it using a pan-tilt-zoom (PTZ) unit, which not only provides 2-DOF to the camera but also compensates for the jittering motion of the UAV to record more stable image streams.
Creating a Stressful Decision Making Environment for Aerial Firefighter Training in Virtual Reality
Rory M. S. Clifford, Sungchul Jung, Simon Hoermann, M. Billinghurst, R. Lindeman. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8797889

The decisions made by Air Attack Supervisor (AAS) helicopter co-pilots in aerial firefighting have critical and immediate impacts. It is difficult to consistently make fast, high-quality decisions under the mental and physical stress being experienced. Real-world training exercises have limitations such as safety, cost, time, and the difficulty of reproducing events, making frequent training infeasible. Virtual Reality (VR) offers new training opportunities, but it is challenging to create a virtual environment with a level of stress analogous to that experienced in the real world. In this paper, we investigate the use of a multi-user, collaborative, multi-sensory (vision, audio, tactile) VR system to produce a realistic environment for practising aerial firefighting training scenarios. We focus on a comparison between our VR training system, an equivalent real-world field training exercise, and an existing radio-only exercise currently in use, comparing Heart-Rate Variability (HRV) and self-reported stress using the Short Stress State Questionnaire (SSSQ). We conducted the study with real trainee AAS firefighters to determine the effectiveness of the system. Our results show no significant differences between the VR training exercise and the real-world exercise in terms of the level of stress measured by HRV, and no significant difference between the VR and radio-only exercises as reported by the SSSQ.
Empowering Young Job Seekers with Virtual Reality
E. Prasolova-Førland, Mikhail Fominykh, Oscar Ihlen Ekelund. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8798179

This paper presents the results of the Virtual Internship project, which aims to help young job seekers gain insights into different workplaces through immersive and interactive experiences. We designed a concept of 'Immersive Job Taste' that provides a rich presentation of occupations with elements of workplace training, targeting a specific group of young job seekers, including high-school students and the unemployed. We developed several scenarios and applied different virtual and augmented reality concepts to build prototypes for different types of devices. The intermediate and final versions of the prototypes were evaluated by several groups of primary users and experts, including over 70 young job seekers and high-school students and over 45 professionals and experts. The data were collected using questionnaires and interviews. The results indicate a generally very positive attitude towards the concept of Immersive Job Taste, although with significant differences between job seekers and experts. The prototype developed for room-scale virtual reality with controllers was generally evaluated better than those using a cardboard viewer with 360° videos or animated 3D graphics on augmented reality glasses. In the paper, we discuss several aspects, such as the potential of immersive technologies for career guidance, fighting youth unemployment by better informing young job seekers, and various practical and technological considerations.
Investigating Spherical Fish Tank Virtual Reality Displays for Establishing Realistic Eye-Contact
Georg Hagemann, Qian Zhou, I. Stavness, S. Fels. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8797905

Eye-contact is a key aspect of non-verbal human communication in everyday tasks [1]. It provides important social and emotional information that can increase the effectiveness of human communication [8]. In a conversation, eye-contact, or the lack thereof, is constantly evaluated, and conversation partners derive subjective judgments of others' credibility, focus, and confidence from it [4]. Seeking another person's eye-contact signals that one's focus is on that person and that the main receptive senses are prepared to receive their input. Likewise, breaking eye-contact usually indicates distraction, loss of confidence, loss of interest, or a shift of focus to a different target.
Effect of Sensory Conflict and Postural Instability on Cybersickness
A. K. T. Ng, L. Chan, H. Lau. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8797781

Sensory conflict theory and postural instability theory have often been tested individually to explain cybersickness in VR systems, but they have seldom been systematically compared. An earlier study evaluated them on a large screen using 2D videos. This study evaluated the effects of sensory conflict and postural instability on discomfort in VR. Virtual visual locomotion was shown on a head-mounted display, and a motion platform vibrated at low frequency while the participant stood on it. Each factor was manipulated alone or in combination. Results showed that the visual-motion-only condition led to the highest misery score, higher than the physical-vibration-only condition. This suggests that, consistent with previous literature, sensory conflict may be a major contributing factor in cybersickness.
Real-Time Continuous Level of Detail Rendering of Point Clouds
Markus Schütz, Katharina Krösl, M. Wimmer. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). DOI: 10.1109/VR.2019.8798284

Real-time rendering of large point clouds requires acceleration structures that reduce the number of points drawn on screen. State-of-the-art algorithms group and render points in hierarchically organized chunks with varying extent and density, which results in sudden changes of density from one level of detail to another, as well as noticeable popping artifacts when additional chunks are blended in or out. These popping artifacts are especially noticeable at lower levels of detail, and consequently in virtual reality, where high performance requirements impose a reduction in detail. We propose a continuous level-of-detail method that exhibits gradual rather than sudden changes in density. Our method continuously recreates a down-sampled vertex buffer from the full point cloud, based on camera orientation, position, and distance to the camera, in a point-wise rather than chunk-wise fashion, at speeds of up to 17 million points per millisecond. As a result, additional details are blended in or out in a less noticeable and significantly less irritating manner compared to the state of the art. The improved acceptance of our method was successfully evaluated in a user study.