Pub Date: 2025-10-01 | Epub Date: 2025-11-06 | DOI: 10.1016/j.vrih.2025.06.002
Shakif AHMED, Dhruba Jyoti SHIL, Tanvir Ahmed SOURO, Sakib Al MAHMOOD, Ferdous Irtiaz KHAN
Background
Brain tumors are challenging to diagnose and treat and require accurate, early therapeutic intervention. Magnetic Resonance Imaging (MRI) scans can visualize the internal structure of the brain, and deep learning is often applied to these images for the early and accurate detection of tumor cells. However, many existing models lack the accuracy and efficacy needed in practical applications. Hybrid or modified models can facilitate better classification and provide insights into early-stage cancer detection.
Methods
This study presents a parallel architecture that uses MRI images and integrates a transformer-based framework with Convolutional Neural Networks (CNNs) to better classify distinct types of brain tumors. The proposed architecture, SwinResDual (SwRD), combines a Residual Network (ResNet) and a Swin Transformer in parallel to extract key features from input images. Using augmented MRI datasets (31,464 scans for multiclass classification and 30,000 scans for binary classification), the architecture processes each image simultaneously through ResNet50 and Swin Transformer branches, leveraging their respective strengths in hierarchical feature extraction and global context modeling to capture local and global image features efficiently. The final classification is obtained by merging these features and passing them through a linear classifier. This approach identifies strong, varied characteristics and supports a precise brain tumor diagnosis.
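The parallel two-branch fusion described above can be sketched schematically. The "branches" below are fixed random projections standing in for the actual pretrained ResNet50 and Swin Transformer networks, and the input resolution and feature dimensions are assumptions for illustration only; the point is the wiring (parallel branches, feature concatenation, linear classifier), not the networks themselves.

```python
import numpy as np

# Hypothetical dimensions: ResNet50 pooled features are 2048-d and a small
# Swin stage is 768-d; the input is a downsampled 32x32 grayscale scan so
# the sketch stays tiny. The stand-in branches are random projections with
# a ReLU, not real networks.
rng = np.random.default_rng(42)
IMG, N_CLASSES = 32 * 32, 4  # flattened scan, 4 tumor classes (multiclass task)

W_res = rng.standard_normal((IMG, 2048)) * 1e-3    # stand-in ResNet50 branch
W_swin = rng.standard_normal((IMG, 768)) * 1e-3    # stand-in Swin branch
W_head = rng.standard_normal((2048 + 768, N_CLASSES)) * 1e-3  # linear classifier

def swrd_forward(x):
    """Process the same image through both branches in parallel, then fuse."""
    local_feats = np.maximum(x @ W_res, 0)    # CNN-style local features
    global_feats = np.maximum(x @ W_swin, 0)  # transformer-style global features
    fused = np.concatenate([local_feats, global_feats])  # feature merging
    return fused @ W_head                     # class logits

x = rng.standard_normal(IMG)
logits = swrd_forward(x)
print(logits.shape)  # (4,)
```

In a real implementation the two branches would be trained jointly, with the concatenated features feeding the shared linear head.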
Results
In an extensive evaluation, the model achieved 99.79% accuracy and 100% cross-validation accuracy for multiclass classification, along with 99.97% accuracy for binary classification.
Conclusions
The findings demonstrate great promise for brain tumor detection and advanced medical imaging diagnostics.
Shakif AHMED, Dhruba Jyoti SHIL, Tanvir Ahmed SOURO, Sakib Al MAHMOOD, Ferdous Irtiaz KHAN. Advancing brain tumor MRI classification using SwRD: A parallel swin transformer-ResNet approach. Virtual Reality Intelligent Hardware, 7(5): 501-522, 2025. DOI: 10.1016/j.vrih.2025.06.002
Background
This study aimed to explore the influence of prior gaming experience on the intensity and onset of cybersickness symptoms by comparing the effects of virtual reality (VR) immersion in gamers and nongamers.
Methods
This study involved 50 male participants, with equal numbers of gamers and nongamers, who were subjected to a VR environment using head-mounted displays in a sitting position for a 15-minute session. The intensity of cybersickness symptoms, such as nausea, oculomotor disturbances, and disorientation, was measured using the simulator sickness questionnaire, and the onset of cybersickness was measured using the fast motion sickness scale. Physiological indices were measured based on heart rate variability (HRV) parameters.
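The simulator sickness questionnaire (SSQ) subscales named above are conventionally scored with the Kennedy et al. (1993) weights; a minimal scoring sketch follows. The symptom-to-subscale assignment is abridged here (the real SSQ has 16 items, some counted in two subscales), and the example ratings are made up, not the study's data.

```python
# Conventional SSQ scaling factors (Kennedy et al., 1993): raw subscale sums
# are multiplied by these weights; the total uses the grand sum times 3.74.
NAUSEA_W, OCULO_W, DISOR_W, TOTAL_W = 9.54, 7.58, 13.92, 3.74

def ssq_scores(nausea_items, oculomotor_items, disorientation_items):
    """Each item is a 0-3 severity rating; returns weighted subscale scores."""
    n = sum(nausea_items)
    o = sum(oculomotor_items)
    d = sum(disorientation_items)
    return {
        "nausea": n * NAUSEA_W,
        "oculomotor": o * OCULO_W,
        "disorientation": d * DISOR_W,
        "total": (n + o + d) * TOTAL_W,
    }

# Hypothetical ratings for one participant (three items per subscale here).
scores = ssq_scores([2, 1, 0], [1, 1, 2], [3, 0, 1])
print(scores["total"])  # (3 + 4 + 4) * 3.74 ~= 41.14
```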
Results
This study found that prior gaming experience significantly affected symptoms of cybersickness during VR immersion. Nongamers experienced more severe symptoms, including higher levels of nausea, disorientation, and oculomotor disturbances, with symptoms appearing earlier than those in gamers. These differences were linked to increased fluctuations in HRV and reduced parasympathetic activity in nongamers, indicating higher autonomic nervous system strain. By contrast, gamers showed more stable HRV responses, suggesting better physiological adaptability to VR environments.
Conclusion
These findings indicate that gamers' familiarity with dynamic visual and sensory inputs may help them manage VR-induced sensory conflicts more effectively.
Chalis Fajri HASIBUAN, Budi HARTONO, Titis WIJAYANTO. Effect of prior gaming experience on cybersickness symptoms in a virtual reality environment. Virtual Reality Intelligent Hardware, 7(5): 468-482, 2025. DOI: 10.1016/j.vrih.2025.06.003
Pub Date: 2025-08-01 | Epub Date: 2025-08-26 | DOI: 10.1016/j.vrih.2022.08.014
Xinrong Hu, Kaifan Yang, Ruiqi Luo, Tao Peng, Junping Liu
With the growing popularity of digital humans, monocular three-dimensional (3D) face reconstruction is widely used in fields such as animation and face recognition. Although current methods trained on single-view image sets perform well in monocular 3D face reconstruction tasks, they tend to rely on the constraints of a prior model or on the appearance conditions of the input images, fundamentally because no effective method has been available to reduce the effects of two-dimensional (2D) ambiguity. To solve this problem, we developed an unsupervised training framework for monocular 3D face reconstruction using rotational cycle consistency. Specifically, to learn more accurate facial information, we first used an autoencoder to factor the input images and applied these factors to generate normalized frontal views. We then used a differentiable renderer with a rotational consistency constraint to refine the reconstruction continuously. Our method provides implicit multi-view consistency constraints on the pose and depth estimation of the input face, and its performance is accurate and robust under large variations in expression and pose. In benchmark tests, our method performed more stably and realistically than other methods for 3D face reconstruction from monocular 2D images.
Xinrong Hu, Kaifan Yang, Ruiqi Luo, Tao Peng, Junping Liu. Learning monocular face reconstruction from in the wild images using rotation cycle consistency. Virtual Reality Intelligent Hardware, 7(4): 379-392, 2025. DOI: 10.1016/j.vrih.2022.08.014
Pub Date: 2025-08-01 | Epub Date: 2025-08-26 | DOI: 10.1016/j.vrih.2025.05.001
Ruicheng Gao, Yue Qi
Background
Physics-based differentiable rendering (PBDR) aims to propagate gradients from scene parameters to image pixels or vice versa. The physically correct gradients obtained can be used in various applications, including inverse rendering and machine learning. Currently, two categories of methods are prevalent in the PBDR community: reparameterization and boundary-sampling methods. State-of-the-art boundary-sampling methods rely on a guiding structure to calculate gradients efficiently. They utilize rays generated by traditional path tracing and project them onto the object silhouette boundary to initialize the guiding structure.
Methods
In this study, we augment previous projective-sampling-based boundary-sampling methods in a bidirectional manner. Specifically, we utilize both the rays spawned from the sensors and the rays emitted by the emitters to initialize the guiding structure.
Results
To demonstrate the benefits of our technique, we perform a comparative analysis of differentiable rendering and inverse rendering performance. We utilize a range of synthetic scene examples and evaluate our method against state-of-the-art projective-sampling-based differentiable rendering methods.
Conclusions
The experiments show that our method achieves lower-variance gradients in forward differentiable rendering and better geometry reconstruction quality in the inverse-rendering results.
Ruicheng Gao, Yue Qi. Bidirectional projective sampling for physics-based differentiable rendering. Virtual Reality Intelligent Hardware, 7(4): 367-378, 2025. DOI: 10.1016/j.vrih.2025.05.001
Pub Date: 2025-08-01 | Epub Date: 2025-08-26 | DOI: 10.1016/j.vrih.2023.10.006
Rui Song, Xiaoying Sun, Dangxiao Wang, Guohong Liu, Dongyan Nie
High-fidelity tactile rendering offers significant potential for improving the richness and immersion of touchscreen interactions. This study focuses on a quantitative description of tactile rendering fidelity using a custom-designed hybrid electrovibration and mechanical vibration (HEM) device. An electrovibration and mechanical vibration (EMV) algorithm that renders 3D gratings with different physical heights was proposed and shown to achieve 81% accuracy in shape recognition. Models of tactile rendering fidelity were established based on the evaluation of the height discrimination threshold, and the psychophysical-physical relationships between the discrimination and reference heights were well described by a modification of Weber’s law, with correlation coefficients higher than 0.9. The physiological-physical relationship between the pulse firing rate and the physical stimulation voltage was modeled using the Izhikevich spiking model with a logarithmic relationship.
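The modified Weber's law relationship mentioned above can be illustrated with a small fitting sketch. A common "modified" form is a linear law with a non-zero intercept, dH = k*H + c; the (reference height, discrimination threshold) pairs and units below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical data: reference grating heights H and the corresponding
# height discrimination thresholds dH (units assumed to be mm).
H = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
dH = np.array([0.08, 0.12, 0.22, 0.40, 0.78])

# Least-squares fit of the modified Weber's law dH = k*H + c.
k, c = np.polyfit(H, dH, 1)

# Pearson correlation coefficient between H and dH (the paper reports
# correlations above 0.9 for its psychophysical-physical models).
r = np.corrcoef(H, dH)[0, 1]

print(f"Weber fraction k={k:.3f}, intercept c={c:.3f}, r={r:.3f}")
```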
Rui Song, Xiaoying Sun, Dangxiao Wang, Guohong Liu, Dongyan Nie. Psychological and physiological model of tactile rendering fidelity using combined electro and mechanical vibration. Virtual Reality Intelligent Hardware, 7(4): 344-366, 2025. DOI: 10.1016/j.vrih.2023.10.006
Pub Date: 2025-08-01 | Epub Date: 2025-08-26 | DOI: 10.1016/j.vrih.2023.08.004
Biao Dong, Wenjun Tan, Weichao Chang, Baoting Li, Yanliang Guo, Quanxing Hu, Guangwei Liu
Background
As information technology has advanced and become widespread, open pit mining has rapidly developed toward integration and digitization. Three-dimensional reconstruction technology has been successfully applied to geological reconstruction and to modeling surface scenes in open pit mines. However, an integrated modeling method for surface and underground mine sites has not been reported.
Methods
In this study, we propose an integrated modeling method for open pit mines that fuses a real scene on the surface with an underground geological model. A real-scene surface model was established using oblique photography. Based on the proposed surface-stitching method, the upper and lower surfaces and the sides of the model were built in stages to construct a complete underground three-dimensional geological model, and the aboveground and underground models were registered together to build an integrated open pit mine model.
Results
The oblique-photography method reconstructed a real-scene surface model of an open pit mine. The proposed surface-stitching algorithm was compared with the ball-pivoting and Poisson algorithms, and the integrity of the reconstructed model was markedly superior to that of the other two methods. In addition, the surface-stitching algorithm was applied to the reconstruction of different formation models and showed good stability and reconstruction efficiency. Finally, the aboveground and underground models were accurately fitted after registration to form an integrated model.
Conclusions
The proposed method can efficiently establish an integrated open pit model. Based on this model, an open pit auxiliary planning system was designed and implemented. It supports mining planning and output calculation, assists users in mining planning and operation management, and improves production efficiency and management levels.
Biao Dong, Wenjun Tan, Weichao Chang, Baoting Li, Yanliang Guo, Quanxing Hu, Guangwei Liu. Integrating models of real aboveground scene and underground geological structures at an open pit mine. Virtual Reality Intelligent Hardware, 7(4): 406-420, 2025. DOI: 10.1016/j.vrih.2023.08.004
Pub Date: 2025-08-01 | Epub Date: 2025-08-26 | DOI: 10.1016/j.vrih.2024.08.004
Ehsan Shourangiz, Fatemeh Ghafari, Chao Wang
The integration of Human-Robot Collaboration (HRC) into Virtual Reality (VR) technology is transforming industries by enhancing workforce skills, improving safety, and optimizing operational processes and efficiency through realistic simulations of industry-specific scenarios. Despite the growing adoption of VR integrated with HRC, comprehensive reviews of current research in HRC-VR within the construction and manufacturing fields are lacking. This review examines the latest advances in designing and implementing HRC using VR technology in these industries. The aim is to address the application domains of HRC-VR, the types of robots, the VR setups, and the software solutions used. To achieve this, a systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology was conducted on the Web of Science and Google Scholar databases, analyzing 383 articles and selecting 53 papers that met the established selection criteria. The findings emphasize a significant focus on enhancing human-robot interaction, with a trend toward immersive VR experiences and interactive 3D content creation tools. However, the integration of HRC with VR, especially in dynamic construction environments, presents unique challenges and opportunities for future research, including the development of more realistic simulations and adaptable robot systems. This paper offers insights for researchers, practitioners, educators, industry professionals, and policymakers interested in leveraging the integration of HRC with VR in the construction and manufacturing industries.
Ehsan Shourangiz, Fatemeh Ghafari, Chao Wang. Human-robot collaboration integrated with virtual reality in construction and manufacturing industries: A systematic review. Virtual Reality Intelligent Hardware, 7(4): 317-343, 2025. DOI: 10.1016/j.vrih.2024.08.004
Pub Date: 2025-08-01 | Epub Date: 2025-08-26 | DOI: 10.1016/j.vrih.2022.08.013
Erwan Leria, Markku Makitalo, Julius Ikkala, Pekka Jääskeläinen
Stereoscopic and multiview rendering are used for virtual reality and the synthetic generation of light fields from three-dimensional scenes. Because rendering multiple views with ray tracing is computationally expensive, multiprocessor machines are necessary to achieve real-time frame rates. In this study, we propose a dynamic load-balancing algorithm for real-time multiview path tracing on multi-compute-device platforms. The proposed algorithm adapts to heterogeneous hardware combinations and dynamic scenes in real time. We show that on a heterogeneous dual-GPU platform, our implementation reduces rendering time by approximately 30%-50% on average compared with a uniform workload distribution, depending on the scene and the number of views.
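The core of dynamic load balancing of this kind can be sketched in a few lines: each device renders a slice of the multiview workload, and slice sizes for the next frame are reweighted by each device's measured throughput on the previous frame. This is our own schematic illustration of the general idea, not the authors' implementation.

```python
def rebalance(work_items, prev_times, prev_shares):
    """Split work_items across devices in proportion to measured throughput.

    prev_times: per-device render time for the previous frame (seconds).
    prev_shares: per-device work assigned for the previous frame.
    """
    # Throughput = items completed per unit time on the last frame.
    throughput = [s / t for s, t in zip(prev_shares, prev_times)]
    total = sum(throughput)
    shares = [round(work_items * tp / total) for tp in throughput]
    shares[-1] = work_items - sum(shares[:-1])  # make the shares sum exactly
    return shares

# Example: 1000 view-rows, a fast and a slow GPU that previously split the
# work 500/500 but took 10 ms and 30 ms respectively -> the fast GPU now
# receives three times the work of the slow one.
print(rebalance(1000, prev_times=[10.0, 30.0], prev_shares=[500, 500]))  # [750, 250]
```

Re-running this each frame lets the split track dynamic scenes, where per-device cost changes as content moves between views.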
Erwan Leria, Markku Makitalo, Julius Ikkala, Pekka Jääskeläinen. Dynamic load balancing for real-time multiview path tracing on multi-GPU architectures. Virtual Reality Intelligent Hardware, 7(4): 393-405, 2025. DOI: 10.1016/j.vrih.2022.08.013
Background
Interconnection of different power systems has a major effect on system stability. This study aims to design an optimal load frequency control (LFC) system based on a proportional-integral (PI) controller for a two-area power system.
Methods
Two areas were connected through an AC tie line in parallel with a DC link to stabilize the frequency of oscillations in both areas. The PI parameters were tuned using the cuckoo search algorithm (CSA) to minimize the integral absolute error (IAE). A state matrix was provided, and the stability of the system was verified by calculating the eigenvalues. The frequency response was investigated for load variation, changes in the generator rate constraint, the turbine time constant, and the governor time constant.
Results
The CSA was compared with the particle swarm optimization (PSO) algorithm under identical conditions. The system was modeled using a state-space mathematical representation and simulated in MATLAB. The results demonstrated the effectiveness of the proposed controller with both algorithms, and the CSA clearly outperformed PSO.
Conclusion
The CSA smooths the system response, reduces ripples, decreases overshoot and settling time, and improves overall system performance under different disturbances.
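The tuning loop described above (cuckoo search minimizing the IAE of the controlled response) can be sketched as follows. This is a toy illustration, not the paper's model: the plant is a single first-order lag rather than the two-area power system, and all gain bounds, population sizes, and iteration counts are assumed values.

```python
# Minimal sketch of cuckoo-search tuning of PI gains (Kp, Ki) by
# minimizing the integral absolute error (IAE). The plant is a toy
# first-order lag, not the paper's two-area power-system model.
import math
import random

random.seed(0)

def iae(kp, ki, dt=0.01, steps=2000):
    """Simulate a unit load disturbance on a first-order plant under
    PI control and return the integral of |error| (rectangle rule)."""
    y, integ, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 0.0 - y                 # setpoint: zero frequency deviation
        integ += err * dt
        u = kp * err + ki * integ     # PI control action
        # First-order plant: tau * dy/dt = -y + u + disturbance, tau = 0.5
        y += dt * (-y + u - 1.0) / 0.5
        total += abs(err) * dt
    return total

def levy_step(scale=1.0, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return scale * u / abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, iters=50, pa=0.25, lo=0.0, hi=10.0):
    nests = [[random.uniform(lo, hi), random.uniform(lo, hi)]
             for _ in range(n_nests)]
    fit = [iae(*n) for n in nests]
    for _ in range(iters):
        # New candidate via a Levy flight from a randomly chosen nest.
        i = random.randrange(n_nests)
        cand = [min(hi, max(lo, x + levy_step())) for x in nests[i]]
        f = iae(*cand)
        j = random.randrange(n_nests)
        if f < fit[j]:
            nests[j], fit[j] = cand, f
        # Abandon a fraction pa of the worst nests and re-seed them.
        worst = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for k in worst[:int(pa * n_nests)]:
            nests[k] = [random.uniform(lo, hi), random.uniform(lo, hi)]
            fit[k] = iae(*nests[k])
    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

(kp, ki), best_iae = cuckoo_search()
print(f"Kp={kp:.2f}, Ki={ki:.2f}, IAE={best_iae:.4f}")
```

The search should drive the IAE well below that of a weakly tuned controller; in the paper, the same objective is evaluated on the full two-area state-space model instead of this toy plant.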
Optimal load frequency control system for two-area connected via AC/DC link using cuckoo search algorithm. Gaber EL-SAADY, Alexey MIKHAYLOV, Nora BARANYAI, Mahrous AHMED, Mahmoud HEMEIDA. Virtual Reality Intelligent Hardware, 7(3):299-316, 2025-06-01. DOI: 10.1016/j.vrih.2025.03.006
Pub Date: 2025-06-01 · Epub Date: 2025-06-26 · DOI: 10.1016/j.vrih.2025.04.002
Alberto CANNAVÒ, Giorgio ARRIGO, Alessandro VISCONTI, Federico De LORENZIS, Fabrizio LAMBERTI
Background
Over the last few years, the rapid advancement of technology has led to the development of many approaches to digitalization. In this respect, the metaverse provides persistent 3D virtual environments that can be used to access digital content, meet virtually, and perform various professional and leisure tasks. Among the numerous technologies supporting the metaverse, immersive Virtual Reality (VR) plays a primary role and offers highly interactive social experiences. Despite growing interest in this area, there are no clear design guidelines for creating environments tailored to the metaverse.
Methods
This study seeks to advance research in this area by building on state-of-the-art studies on the design of immersive virtual environments in the context of the metaverse and by proposing how to integrate cutting-edge technologies in this context. Best practices were identified by i) analyzing literature studies focused on human behavior in immersive virtual environments, ii) extracting common features of existing social VR platforms, and iii) conducting interviews with experts in a specific application domain. In particular, this study considered the creation of a new virtual environment for MetaLibrary, a VR-based social platform aimed at integrating public libraries into the metaverse. Several implementation challenges and additional requirements were identified for the development of virtual environments (VEs), and these were considered in selecting specific cutting-edge technologies and integrating them into the development process. A user study was also conducted to investigate design aspects (namely, lighting conditions and richness of the scene layout) for which clear indications could not be derived from the above analysis because several alternative configurations were possible.
Results
The work reported in this paper seeks to bridge the gap between existing VR platforms and the related literature, on the one hand, and the requirements of immersive virtual environments for the metaverse, on the other, by reporting a set of best practices that were used to build a social virtual environment meeting users' expectations and needs.
Conclusions
Results suggest that carefully designed virtual environments can positively affect user experience and interaction within the metaverse. The insights gained from this study offer valuable cues for developing immersive virtual environments for the metaverse that deliver more effective and engaging experiences.
Designing social immersive virtual environments for the Metaverse: The case study of MetaLibrary. Alberto CANNAVÒ, Giorgio ARRIGO, Alessandro VISCONTI, Federico De LORENZIS, Fabrizio LAMBERTI. Virtual Reality Intelligent Hardware, 7(3):279-298, 2025-06-01. DOI: 10.1016/j.vrih.2025.04.002