3D scene graph prediction from point clouds
Pub Date: 2022-02-01 DOI: 10.1016/j.vrih.2022.01.005
Fanfan Wu, Feihu Yan, Weimin Shi, Zhong Zhou
Background
In this study, we propose a novel 3D scene graph prediction approach for scene understanding from point clouds.
Methods
The approach automatically organizes the entities of a scene into a graph, where objects are nodes and their relationships are modeled as edges. More specifically, we employ a dynamic graph CNN (DGCNN) to capture the features of objects and their relationships in the scene. A graph attention network (GAT) is introduced to exploit latent features obtained from the initial estimation and further refine the object arrangement in the graph structure. A loss function modified from cross-entropy with a variable weight is proposed to address the multi-category problem in object and predicate prediction.
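To make the loss concrete, below is a minimal PyTorch sketch of one plausible reading of a cross-entropy loss with a variable weight: per-class weights re-estimated from running label frequencies, so rare object and predicate classes are not drowned out. The inverse-frequency update rule and the class counts are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

class VariableWeightCrossEntropy(torch.nn.Module):
    """Cross-entropy whose per-class weights are re-estimated from running
    label frequencies, so rare classes contribute more to the loss.
    (Inverse-frequency weighting is an assumed scheme, not the paper's.)"""

    def __init__(self, num_classes: int, momentum: float = 0.9):
        super().__init__()
        self.momentum = momentum
        # Running estimate of class frequencies, initialized uniform.
        self.register_buffer("freq", torch.full((num_classes,), 1.0 / num_classes))

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Update the running class-frequency estimate from this batch.
        batch = torch.bincount(targets, minlength=self.freq.numel()).float()
        batch = batch / batch.sum().clamp(min=1.0)
        self.freq = self.momentum * self.freq + (1.0 - self.momentum) * batch
        # Inverse-frequency weights, normalized to mean 1 for stable scaling.
        weight = 1.0 / (self.freq + 1e-6)
        return F.cross_entropy(logits, targets, weight=weight / weight.mean())

# One instance per prediction head; the class counts are placeholders:
object_loss = VariableWeightCrossEntropy(num_classes=160)
predicate_loss = VariableWeightCrossEntropy(num_classes=27)
```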
Results
Experiments reveal that the proposed approach performs favorably against the state-of-the-art methods in terms of predicate classification and relationship prediction and achieves comparable performance on object classification prediction.
Conclusions
The 3D scene graph prediction approach can form an abstract description of the scene space from point clouds.
{"title":"3D scene graph prediction from point clouds","authors":"Fanfan Wu, Feihu Yan, Weimin Shi, Zhong Zhou","doi":"10.1016/j.vrih.2022.01.005","DOIUrl":"10.1016/j.vrih.2022.01.005","url":null,"abstract":"<div><h3>Background</h3><p>In this study, we propose a novel 3D scene graph prediction approach for scene understanding from point clouds.</p></div><div><h3>Methods</h3><p>It can automatically organize the entities of a scene in a graph, where objects are nodes and their relationships are modeled as edges. More specifically, we employ the DGCNN to capture the features of objects and their relationships in the scene. A Graph Attention Network (GAT) is introduced to exploit latent features obtained from the initial estimation to further refine the object arrangement in the graph structure. A one loss function modified from cross entropy with a variable weight is proposed to solve the multi-category problem in the prediction of object and predicate.</p></div><div><h3>Results</h3><p>Experiments reveal that the proposed approach performs favorably against the state-of-the-art methods in terms of predicate classification and relationship prediction and achieves comparable performance on object classification prediction.</p></div><div><h3>Conclusions</h3><p>The 3D scene graph prediction approach can form an abstract description of the scene space from point clouds.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000055/pdf?md5=00217b3d733a606f1856c11825c17ba8&pid=1-s2.0-S2096579622000055-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116543976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel SSA-CCA Framework for Muscle Artifact Removal from Ambulatory EEG
Pub Date: 2022-02-01 DOI: 10.1016/j.vrih.2022.01.001
Yuheng Feng, Qingze Liu, Aiping Liu, Ruobing Qian, Xun Chen
Background
Electroencephalography (EEG) has gained popularity in various types of biomedical applications as a signal source that can be easily acquired and conveniently analyzed. However, owing to a complex scalp electrical environment, EEG is often polluted by diverse artifacts, with electromyography artifacts being the most difficult to remove. In particular, for ambulatory EEG devices with a restricted number of channels, dealing with muscle artifacts is a challenge.
Methods
In this study, we propose a simple but effective scheme that combines the singular spectrum analysis (SSA) and canonical correlation analysis (CCA) algorithms for the single-channel case and then extend it to the few-channel case by adding channel combining and dividing operations.
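As an illustration of how the two algorithms compose for a single channel, the sketch below decomposes the signal with SSA into pseudo-channels, applies an autocorrelation-based CCA step to suppress muscle-like sources, and sums the cleaned components back into a signal. The lag-1 autocorrelation criterion, window length, and threshold are assumptions, not the paper's exact parameters.

```python
import numpy as np

def hankel_average(m):
    """Diagonal averaging: map a trajectory matrix back to a 1-D series
    by averaging its anti-diagonals."""
    w, k = m.shape
    out, cnt = np.zeros(w + k - 1), np.zeros(w + k - 1)
    for i in range(w):
        out[i:i + k] += m[i]
        cnt[i:i + k] += 1
    return out / cnt

def ssa_decompose(x, window):
    """SSA: embed the signal in a Hankel trajectory matrix, SVD it, and
    return one reconstructed component per singular value."""
    k = len(x) - window + 1
    traj = np.stack([x[i:i + window] for i in range(k)], axis=1)  # (window, k)
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    return np.stack([hankel_average(s[i] * np.outer(u[:, i], vt[i]))
                     for i in range(len(s))])                      # (window, len(x))

def cca_autocorr_denoise(z, thresh=0.9):
    """CCA step (as in BSS-CCA): recover sources ordered by lag-1
    autocorrelation and zero the low-autocorrelation (muscle-like) ones."""
    z = z - z.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(z @ z.T / z.shape[1])
    w = e @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ e.T   # whitener
    zw = w @ z
    c1 = zw[:, 1:] @ zw[:, :-1].T / (z.shape[1] - 1)
    lam, v = np.linalg.eigh((c1 + c1.T) / 2)   # lam ~ source autocorrelations
    s = v.T @ zw                               # canonical sources
    s[lam < thresh] = 0.0                      # drop muscle-like sources
    return np.linalg.pinv(w) @ v @ s           # back to component space

def ssa_cca_clean(x, window=64, thresh=0.9):
    """Single-channel cleaning: SSA components act as pseudo-channels for
    CCA; summing the cleaned components reconstructs the signal."""
    return cca_autocorr_denoise(ssa_decompose(x, window), thresh).sum(axis=0)
```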
Results
We evaluated the proposed framework on both semi-simulated and real-life data and compared it with several state-of-the-art methods. The results demonstrate the framework's superior performance in both the single-channel and few-channel cases.
Conclusions
Given its effectiveness and low time cost, this approach is suitable for real-world biomedical signal processing applications.
{"title":"A Novel SSA-CCA Framework forMuscle Artifact Removal from Ambulatory EEG","authors":"Yuheng Feng , Qingze Liu , Aiping Liu , Ruobing Qian , Xun Chen","doi":"10.1016/j.vrih.2022.01.001","DOIUrl":"10.1016/j.vrih.2022.01.001","url":null,"abstract":"<div><h3>Background</h3><p>Electroencephalography (EEG) has gained popularity in various types of biomedical applications as a signal source that can be easily acquired and conveniently analyzed. However, owing to a complex scalp electrical environment, EEG is often polluted by diverse artifacts, with electromyography artifacts being the most difficult to remove. In particular, for ambulatory EEG devices with a restricted number of channels, dealing with muscle artifacts is a challenge.</p></div><div><h3>Methods</h3><p>In this study, we propose a simple but effective novel scheme that combines singular spectrum analysis (SSA) and canonical correlation analysis (CCA) algorithms for single-channel problems and then extend it to a fewchannel case by adding additional combining and dividing operations to channels.</p></div><div><h3>Results</h3><p>We evaluated our proposed framework on both semi-simulated and real-life data and compared it with some state-of-theart methods. The results demonstrate this novel framework's superior performance in both single-channel and few-channel cases.</p></div><div><h3>Conclusions</h3><p>This promising approach, based on its effectiveness and low time cost, is suitable for real-world biomedical signal processing applications.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000018/pdf?md5=7a3850ea23d366350ac6e69f30e69d43&pid=1-s2.0-S2096579622000018-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116375593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Giant magneto-impedance sensor with working point self-adaptation for unshielded human bio-magnetic detection
Pub Date: 2022-02-01 DOI: 10.1016/j.vrih.2022.01.003
Changlin Han, Ming Xu, Jingsheng Tang, Yadong Liu, Zongtan Zhou
Background
Compared with traditional biomagnetic field detection devices, such as superconducting quantum interference devices (SQUIDs) and atomic magnetometers, only giant magneto-impedance (GMI) sensors can be applied to unshielded human brain biomagnetic detection, and they have potential applications in next-generation wearable equipment for brain-computer interfaces (BCIs). Building a better GMI sensor without magnetic shielding requires maximizing the stimulated GMI effect and minimizing environmental noise interference. Moreover, the GMI effect stimulated in an amorphous filament is closely related to its working point, which is sensitive to both the external magnetic field and the drive current of the filament.
Methods
In this paper, we propose a new noise-reducing GMI gradiometer with a dual-loop self-adapting structure. Noise reduction is realized by a direction-flexible differential probe, and the dual-loop structure optimizes and stabilizes the working point by automatically controlling the external magnetic field and the drive current. This dual-loop structure is fully program-controlled by a microcontroller unit (MCU), which not only simplifies the traditional constant-parameter sensor circuit, saving the time required to adjust circuit component parameters, but also improves the sensor's performance and environmental adaptability.
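The dual-loop idea can be sketched as two interleaved feedback rules: one nulls the residual field at the filament, the other tunes the drive current toward peak sensitivity. Everything below (the SimulatedGMIProbe stand-in, its toy physics, and the loop gains) is hypothetical scaffolding for illustration, not the paper's MCU firmware.

```python
import math

class SimulatedGMIProbe:
    """Toy stand-in for the probe electronics (illustrative physics only):
    the residual field is the ambient field minus the compensation coil's
    contribution, and sensitivity peaks at one particular drive current."""
    def __init__(self):
        self.ambient = 30e-6       # ambient field in tesla (arbitrary)
        self.coil_gain = 1e-3      # tesla per ampere of coil current
        self.coil_current = 0.0
        self.drive_current = 2e-3  # filament drive current in amperes

    def read_field(self):
        return self.ambient - self.coil_gain * self.coil_current

    def sensitivity(self):
        # |dZ/dH| modeled as a bump peaking at 5 mA drive (arbitrary).
        return math.exp(-((self.drive_current - 5e-3) / 2e-3) ** 2)

def self_adapt(probe, iters=200, k_field=0.5, dither=5e-5):
    for _ in range(iters):
        # Loop 1: null the residual field so the filament sits at the
        # zero-field working point.
        probe.coil_current += k_field * probe.read_field() / probe.coil_gain
        # Loop 2: hill-climb the drive current toward maximum sensitivity.
        base = probe.sensitivity()
        probe.drive_current += dither
        if probe.sensitivity() < base:
            probe.drive_current -= 2 * dither  # step the other way instead

probe = SimulatedGMIProbe()
self_adapt(probe)
print(probe.read_field(), probe.drive_current)  # ~0 T residual, ~5 mA drive
```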
Results
In the performance test, within 2 min of self-adaptation, our sensor showed better sensitivity and a higher signal-to-noise ratio (SNR) than traditional designs and achieved a background noise of 12 pT/√Hz at 10 Hz and 7 pT/√Hz at 200 Hz.
Conclusion
To the best of our knowledge, our sensor is the first to realize self-adaptation of both the external magnetic field and the drive current.
{"title":"Giant magneto-impedance sensor with working point selfadaptation for unshielded human bio-magnetic detection","authors":"Changlin Han, Ming Xu, Jingsheng Tang, Yadong Liu, Zongtan Zhou","doi":"10.1016/j.vrih.2022.01.003","DOIUrl":"10.1016/j.vrih.2022.01.003","url":null,"abstract":"<div><h3>Background</h3><p>Compared with traditional biomagnetic field detection devices, such as superconducting quantum interference devices (SQUIDs) and atomic magnetometers, only giant magnetoimpedance (GMI) sensors can be applied for unshielded human brain biomagnetic detection, and they have the potential for application in next-generation wearable equipment for brain-computer interfaces (BCIs). Achieving a better GMI sensor without magnetic shielding requires the stimulation of the GMI effect to be maximized and environmental noise interference to be minimized. Moreover, the GMI effect stimulated in an amorphous filament is closely related to its working point, which is sensitive to both the external magnetic field and the drive current of the filament.</p></div><div><h3>Methods</h3><p>In this paper, we propose a new noisereducing GMI gradiometer with a dual-loop self-adapting structure. Noise reduction is realized by a direction-flexible differential probe, and the dual-loop structure optimizes and stabilizes the working point by automatically controlling the external magnetic field and drive current. This dual-loop structure is fully program controlled by a micro control unit (MCU), which not only simplifies the traditional constantparameter sensor circuit, saving the time required to adjust the circuit component parameters, but also improves the sensor performance and environmental adaptation.</p></div><div><h3>Results</h3><p>In the performance test, within 2 min of self-adaptation, our sensor showed a better sensitivity and signal-to-noise ratio (SNR) than those of the traditional designs and achieved a background noise of 12 pT/√Hz at 10 Hz and 7pT/√Hz at 200 Hz.</p></div><div><h3>Conclusion</h3><p>To the best of our knowledge, our sensor is the first to realize self-adaptation of both the external magnetic field and the drive current.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000031/pdf?md5=0c928ff20b3f4ee53d0d5842fa3801f4&pid=1-s2.0-S2096579622000031-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"119271510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual-reality-based digital twin of office spaces with social distance measurement feature
Pub Date: 2022-02-01 DOI: 10.1016/j.vrih.2022.01.004
Abhishek Mukhopadhyay, G S Rajshekar Reddy, KamalPreet Singh Saluja, Subhankar Ghosh, Anasol Peña-Rios, Gokul Gopal, Pradipta Biswas
Background
Social distancing is an effective way to reduce the spread of the SARS-CoV-2 virus. Many students and researchers have already attempted to use computer vision technology to automatically detect human beings in the field of view of a camera and help enforce social distancing. However, because of the present lockdown measures in several countries, validating computer vision systems on large-scale datasets is a challenge.
Methods
In this paper, a new method is proposed for generating customized datasets and validating deep-learning-based computer vision models using virtual reality (VR) technology. Using VR, we modeled a digital twin (DT) of an existing office space and used it to create a dataset of individuals in different postures, clothing, and locations. To test the proposed solution, we implemented a convolutional neural network (CNN) model for detecting people in a limited-size dataset of real humans and a simulated dataset of humanoid figures.
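For the distance-measurement step, here is a minimal sketch under common assumptions: person bounding boxes from any detector, plus a precomputed image-to-floor homography H (which a digital twin makes easy to calibrate). The feet_point heuristic and the 2 m threshold are illustrative, not necessarily the paper's exact pipeline.

```python
import numpy as np

def feet_point(box):
    """Bottom-center of an (x1, y1, x2, y2) box: where the person
    meets the floor, in homogeneous image coordinates."""
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, y2, 1.0])

def floor_coords(box, H):
    p = H @ feet_point(box)   # project through the image-to-floor homography
    return p[:2] / p[2]       # de-homogenize -> floor position in meters

def violations(boxes, H, min_dist=2.0):
    """All pairs of detections closer than min_dist meters on the floor."""
    pts = [floor_coords(b, H) for b in boxes]
    return [(i, j, float(np.linalg.norm(pts[i] - pts[j])))
            for i in range(len(pts)) for j in range(i + 1, len(pts))
            if np.linalg.norm(pts[i] - pts[j]) < min_dist]

# Example with an identity homography (image coordinates already in meters):
boxes = [(0.0, 0.0, 1.0, 2.0), (1.2, 0.0, 2.2, 2.0)]
print(violations(boxes, np.eye(3)))   # -> one pair ~1.2 m apart: flagged
```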
Results
We detected persons in both the real and synthetic datasets with more than 90% accuracy, and the actual and measured distances were significantly correlated (r=0.99). Finally, we used intermediate-layer- and heatmap-based data visualization techniques to explain the failure modes of the CNN.
Conclusions
A new application of DTs is proposed to enhance workplace safety by measuring the social distance between individuals. Using the proposed pipeline together with a DT of the shared space to visualize both environmental and human-behavior aspects preserves individuals' privacy and improves the latency of such monitoring systems, because only the extracted information is streamed.
{"title":"Virtual-reality-based digital twin of office spaces with social distance measurement feature","authors":"Abhishek Mukhopadhyay , G S Rajshekar Reddy , KamalPreet Singh Saluja , Subhankar Ghosh , Anasol Peña-Rios , Gokul Gopal , Pradipta Biswas","doi":"10.1016/j.vrih.2022.01.004","DOIUrl":"10.1016/j.vrih.2022.01.004","url":null,"abstract":"<div><h3>Background</h3><p>Social distancing is an effective way to reduce the spread of the SARS-CoV-2 virus. Many students and researchers have already attempted to use computer vision technology to automatically detect human beings in the field of view of a camera and help enforce social distancing. However, because of the present lockdown measures in several countries, the validation of computer vision systems using large-scale datasets is a challenge.</p></div><div><h3>Methods</h3><p>In this paper, a new method is proposed for generating customized datasets and validating deep-learning-based computer vision models using virtual reality (VR) technology. Using VR, we modeled a digital twin (DT) of an existing office space and used it to create a dataset of individuals in different postures, dresses, and locations. To test the proposed solution, we implemented a convolutional neural network (CNN) model for detecting people in a limited-sized dataset of real humans and a simulated dataset of humanoid figures.</p></div><div><h3>Results</h3><p>We detected the number of persons in both the real and synthetic datasets with more than 90% accuracy, and the actual and measured distances were significantly correlated (<em>r</em>=0.99). Finally, we used intermittent-layer- and heatmap-based data visualization techniques to explain the failure modes of a CNN.</p></div><div><h3>Conclusions</h3><p>A new application of DTs is proposed to enhance workplace safety by measuring the social distance between individuals. The use of our proposed pipeline along with a DT of the shared space for visualizing both environmental and human behavior aspects preserves the privacy of individuals and improves the latency of such monitoring systems because only the extracted information is streamed.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000043/pdf?md5=be8ce798f1ed4b70c3af4fe5572566c1&pid=1-s2.0-S2096579622000043-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45589549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic targets searching assistance based on virtual camera priority
Pub Date: 2021-12-01 DOI: 10.1016/j.vrih.2021.10.001
Zixiang Zhao, Quanwei Zhou, Xiaoguang Han, Lili Wang
Background
When a user walks freely in an unknown virtual scene and searches for multiple dynamic targets, the lack of a comprehensive understanding of the environment may have a negative impact on the execution of virtual reality tasks. Previous studies assist users with auxiliary tools, such as top-view maps or trails, or with exploration guidance, such as paths generated automatically from the user's location and important static spots in the virtual scene. However, in some virtual reality applications, when the scene has complex occlusions and the user cannot obtain any real-time position information about the dynamic targets, such assistance cannot help the user complete the task more effectively.
Methods
We design a virtual camera priority-based assistance method to help users search for dynamic targets efficiently. Instead of forcing users to go to destinations, we provide an optimized instant path that guides them to places where they are more likely to find dynamic targets when they ask for help. We assume that a certain number of virtual cameras are fixed in the virtual scene to obtain extra depth maps, which capture the depth information of the scene and the locations of the dynamic targets. Our method automatically analyzes the priority of these virtual cameras, chooses a destination, and generates an instant path to assist the user in finding the dynamic targets. The method requires no manual supervision or input, making it suitable for various virtual reality applications.
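A minimal sketch of the priority idea, under assumed scoring terms: each fixed camera is ranked by how many dynamic targets its depth map currently sees, discounted by its distance to the user, and the user is routed toward what the best camera sees. The weights and the navmesh remark are assumptions, not the authors' exact formulation.

```python
import numpy as np

def camera_priority(cam_pos, visible_targets, user_pos, w_vis=1.0, w_dist=0.1):
    """Higher when the camera sees more targets and is nearer the user."""
    return w_vis * len(visible_targets) - w_dist * np.linalg.norm(cam_pos - user_pos)

def pick_destination(cameras, user_pos):
    """cameras: list of (position, visible_target_positions) pairs."""
    best = max(cameras, key=lambda c: camera_priority(c[0], c[1], user_pos))
    pos, targets = best
    # Head for the centroid of what the best camera sees, or the camera
    # itself if it currently sees nothing.
    return np.mean(targets, axis=0) if len(targets) else pos

# The "instant path" would then be a collision-free route from user_pos to
# this destination, e.g., A* over the scene's walkable navmesh (assumed).
cams = [(np.array([0.0, 0.0]), [np.array([1.0, 2.0]), np.array([2.0, 2.0])]),
        (np.array([9.0, 9.0]), [])]
print(pick_destination(cams, user_pos=np.array([5.0, 5.0])))  # -> [1.5 2.0]
```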
Results
A user study was designed to evaluate the proposed method. The results indicate that, compared with three conventional navigation methods, such as the top-view method, our method helps users find dynamic targets more efficiently: it reduces the task completion time, reduces the number of resets, increases the average distance between resets, and lowers the user task load.
Conclusions
We presented a method for improving dynamic target searching efficiency in virtual scenes through virtual camera priority-based path guidance. Compared with three conventional navigation methods, such as the top-view method, it helps users find dynamic targets more effectively.
{"title":"Dynamic targets searching assistance based on virtual camera priority","authors":"Zixiang Zhao , Quanwei Zhou , Xiaoguang Han , Lili Wang","doi":"10.1016/j.vrih.2021.10.001","DOIUrl":"10.1016/j.vrih.2021.10.001","url":null,"abstract":"<div><h3>Background</h3><p>When a user walks freely in an unknown virtual scene and searches for multiple dynamic targets, the lack of a comprehensive understanding of the environment may have a negative impact on the execution of virtual reality tasks. Previous studies can help users with auxiliary tools, such as top view maps or trails, and exploration guidance, for example, automatically generated paths according to the user location and important static spots in virtual scenes. However, in some virtual reality applications, when the scene has complex occlusions, and the user cannot obtain any real-time position information of the dynamic target, the above assistance cannot help the user complete the task more effectively.</p></div><div><h3>Methods</h3><p>We design a virtual camera priority-based assistance to help the user search dynamic targets efficiently. Instead of forcing users to go to destinations, we provide an optimized instant path to guide them to places where they are more likely to find dynamic targets when they ask for help. We assume that a certain number of virtual cameras are fixed in virtual scenes to obtain extra depth maps, which capture the depth information of the scene and the locations of the dynamic targets. Our methodautomatically analyzes the priority of these virtual cameras, chooses the destination, and generates an instant path to assist the user in finding the dynamic targets. Our method is suitable for various virtual reality applications that do not require manual supervision or input.</p></div><div><h3>Results</h3><p>A user study is designed to evaluate the proposed method. The results indicate that compared with three conventional navigation methods, such as the top-view method, our method can help users find dynamic targets more efficiently. The advantages include reducing the task completion time, reducing the number of resets, increasing the average distance between resets, and reducing user task load.</p></div><div><h3>Conclusions</h3><p>We presented a method for improving dynamic target searching efficiency in virtual scenes by virtual camera priority-based path guidance. Compared with three conventional navigation methods, such as the top-view method, this method can help users find dynamic targets more effectively.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579621000930/pdf?md5=00b21315570c32cc5c754a9e7d0ef4fe&pid=1-s2.0-S2096579621000930-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121549539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Redirected jumping in virtual scenes with alleys
Pub Date: 2021-12-01 DOI: 10.1016/j.vrih.2021.06.004
Xiaolong Liu, Lili Wang
Background
The redirected jumping (RDJ) technique is a new locomotion method that saves physical tracking area and enhances the body movement experience of users in virtual reality. In a previous study, the range of imperceptible manipulation gains in RDJ was discussed in an empty virtual environment (VE).
Methods
In this study, we conducted three tasks to investigate the influence of alley width on the detection threshold of jump redirection in a VE.
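For context, the sketch below shows what the three manipulated gains do to a jump: the virtual camera displacement is the tracked physical displacement scaled per axis, and the detection-threshold experiments ask how far each gain can drift from 1.0 before users notice. The y-up axis convention is an assumption.

```python
import numpy as np

def redirect_jump(delta_real, g_dist=1.0, g_height=1.0):
    """Scale a tracked per-frame head displacement (x, y, z), y up:
    the distance gain acts on the horizontal plane, the height gain on y."""
    dx, dy, dz = delta_real
    return np.array([g_dist * dx, g_height * dy, g_dist * dz])

def redirect_rotation(delta_yaw_real, g_rot=1.0):
    """Scale the physical mid-air yaw rotation by the rotation gain."""
    return g_rot * delta_yaw_real

# A physical jump of 0.5 m forward and 0.3 m up, rendered with a 1.2x
# distance gain and a 1.4x height gain:
print(redirect_jump(np.array([0.0, 0.3, 0.5]), g_dist=1.2, g_height=1.4))
```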
Results
The results demonstrated that the imperceptible distance gain range in RDJ was not associated with the width of the alleys, whereas the imperceptible height and rotation gain ranges were related to it.
Conclusions
We preliminarily summarize the relationship between the occlusion distance and the manipulation ranges of the three gains in a complex environment, and we provide guiding principles for choosing the three gains in RDJ according to the occlusion distance in such environments.
{"title":"Redirected jumping in virtual scenes with alleys","authors":"Xiaolong Liu, Lili Wang","doi":"10.1016/j.vrih.2021.06.004","DOIUrl":"10.1016/j.vrih.2021.06.004","url":null,"abstract":"<div><h3>Background</h3><p>The redirected jumping (RDJ) technique is a new locomotion method that saves physical tracking area and enhances the body movement experience of users in virtual reality. In a previous study, the range of imperceptible manipulation gains in RDJ was discussed in an empty virtual environment (VE).</p></div><div><h3>Methods</h3><p>In this study, we conducted three tasks to investigate the influence of alley width on the detection threshold of jump redirection in a VE.</p></div><div><h3>Results</h3><p>The results demonstrated that the imperceptible distance gain range in RDJ was not associated with the width of the alleys. The imperceptible height and rotation gain ranges in RDJ are related to the width of the alleys.</p></div><div><h3>Conclusions</h3><p>We preliminarily summarized the relationship between the occlusion distance and manipulation range of the three gains in a complex environment. Simultaneously, the guiding principle for choosing three gains in RDJ according to the occlusion distance in a complex environment is provided.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579621000905/pdf?md5=81cce5212bcaf2dc57ecce5d78048502&pid=1-s2.0-S2096579621000905-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126424484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual climbing: An immersive upslope walking system using passive haptics
Pub Date: 2021-12-01 DOI: 10.1016/j.vrih.2021.08.008
Liming Wang, Xianwei Chen, Tianyang Dong, Jing Fan
Background
In virtual environments (VEs), users can explore a large virtual scene through the viewpoint operation of a head-mounted display (HMD) and movement gains combined with redirected walking technology. The existing redirection methods and viewpoint operations are effective in the horizontal direction; however, they cannot help participants experience immersion in the vertical direction. To improve the immersion of upslope walking, this study presents a virtual climbing system based on passive haptics.
Methods
This virtual climbing system uses the tactile feedback provided by sponges, a commonly used flexible material, to simulate tactile sensations on the user's soles. In addition, the visual stimulus of the HMD, the tactile feedback of the flexible material, and the user's walking operations in the VE, combined with redirection technology, are adopted to enhance the user's perception of the VE. In the experiments, a physical space with a hard, flat floor and three types of sponges with thicknesses of 3, 5, and 8 cm were used.
Results
We recruited 40 volunteers for these experiments, and the results showed that, within a certain range, a thicker flexible material makes it more difficult for users to roam and walk.
Conclusion
The virtual climbing system can enhance users' perception of upslope walking in a VE.
{"title":"Virtual climbing: An immersive upslope walking system using passive haptics","authors":"Liming Wang, Xianwei Chen, Tianyang Dong, Jing Fan","doi":"10.1016/j.vrih.2021.08.008","DOIUrl":"10.1016/j.vrih.2021.08.008","url":null,"abstract":"<div><h3>Background</h3><p>In virtual environments (VEs), users can explore a large virtual scene through the viewpoint operation of a head-mounted display (HMD) and movement gains combined with redirected walking technology. The existing redirection methods and viewpoint operations are effective in the horizontal direction; however, they cannot help participants experience immersion in the vertical direction. To improve the immersion of upslope walking, this study presents a virtual climbing system based on passive haptics.</p></div><div><h3>Methods</h3><p>This virtual climbing system uses the tactile feedback provided by sponges, a commonly used flexible material, to simulate the tactile sense of a user's soles. In addition, the visual stimulus of the HMD, the tactile feedback of the flexible material, and the operation of the user's walking in a VE combined with redirection technology are all adopted to enhance the user's perception in a VE. In the experiments, a physical space with a hard-flat floor and three types of sponges with thicknesses of 3, 5, and 8 cm were utilized.</p></div><div><h3>Results</h3><p>We recruited 40 volunteers to conduct these experiments, and the results showed that a thicker flexible material increases the difficulty for users to roam and walk within a certain range.</p></div><div><h3>Conclusion</h3><p>The virtual climbing system can enhance users' perception of upslope walking in a VE.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579621000929/pdf?md5=0ac60f7de11cff82063a0b50870e2cfc&pid=1-s2.0-S2096579621000929-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115337022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of virtual environment and self-representations on perception and physical performance in redirected jumping
Pub Date: 2021-12-01 DOI: 10.1016/j.vrih.2021.06.003
Yijun Li, Miao Wang, Derong Jin, Frank Steinicke, Qinping Zhao
Background
Redirected jumping (RDJ) allows users to explore virtual environments (VEs) naturally by scaling a small real-world jump to a larger virtual jump through virtual camera motion manipulation, thereby addressing the problem of limited physical space in VR applications. Previous RDJ studies have mainly focused on detection threshold estimation. However, the effect that the VE or the user's self-representation (SR) has on perception and performance during RDJ remains unclear.
Methods
In this paper, we report experiments to measure the perception (detection thresholds for gains, presence, embodiment, intrinsic motivation, and cybersickness) and physical performance (heart rate intensity, preparation time, and actual jumping distance) of redirected forward jumping under six different combinations of VE (low and high visual richness) and SRs (invisible, shoes, and human-like).
Results
Our results indicated that the detection threshold ranges for horizontal translation gains were significantly smaller in the VE with high visual richness than in the one with low visual richness. Across the different SRs, we found no significant differences in detection thresholds, but actual jumping distances were longer with the invisible body than with the other two SRs. In the high-visual-richness VE, the preparation time for jumping with a human-like avatar was significantly longer than with the other SRs. Finally, some correlations were found between the perception and physical performance measures.
Conclusions
All these findings suggest that both VE and SRs influence users' perception and performance in RDJ and must be considered when designing locomotion techniques.
{"title":"Effects of virtual environment and self-representations on perception and physical performance in redirected jumping","authors":"Yijun Li , Miao Wang , Derong Jin , Frank Steinicke , Qinping Zhao","doi":"10.1016/j.vrih.2021.06.003","DOIUrl":"10.1016/j.vrih.2021.06.003","url":null,"abstract":"<div><h3>Background</h3><p>Redirected jumping (RDJ) allows users to explore virtual environments (VEs) naturally by scaling a small real-world jump to a larger virtual jump with virtual camera motion manipulation, thereby addressing the problem of limited physical space in VR applications. Previous RDJ studies have mainly focused on detection threshold estimation. However, the effect VE or selfrepresentation (SR) has on the perception or performance of RDJs remains unclear.</p></div><div><h3>Methods</h3><p>In this paper, we report experiments to measure the perception (detection thresholds for gains, presence, embodiment, intrinsic motivation, and cybersickness) and physical performance (heart rate intensity, preparation time, and actual jumping distance) of redirected forward jumping under six different combinations of VE (low and high visual richness) and SRs (invisible, shoes, and human-like).</p></div><div><h3>Results</h3><p>Our results indicated that the detection threshold ranges for horizontal translation gains were significantly smaller in the VE with high rather than low visual richness. When different SRs were applied, our results did not suggest significant differences in detection thresholds, but it did report longer actual jumping distances in the invisible body case compared with the other two SRs. In the high visual richness VE, the preparation time for jumping with a human-like avatar was significantly longer than that with other SRs. Finally, some correlations were found between perception and physical performance measures.</p></div><div><h3>Conclusions</h3><p>All these findings suggest that both VE and SRs influence users' perception and performance in RDJ and must be considered when designing locomotion techniques.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579621000899/pdf?md5=6178a08c13f72f6f91cb8554e898e08d&pid=1-s2.0-S2096579621000899-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131999273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}