Pub Date: 2026-02-01 | Epub Date: 2026-03-14 | DOI: 10.1016/j.vrih.2025.12.003
Dandan LIU
Optical coherence tomography (OCT), particularly swept-source OCT (SS-OCT), is widely employed in medical diagnostics and industrial inspection owing to its high-resolution imaging capabilities. However, SS-OCT 3D imaging often suffers from stripe artifacts caused by unstable light sources, system noise, and environmental interference, which complicate real-time processing of large-scale datasets. To address this issue, this study introduces a real-time reconstruction system that combines stripe-artifact suppression with parallel computing on a graphics processing unit (GPU). The approach employs a frequency-domain filtering algorithm whose adaptive suppression parameters are dynamically adjusted through an image quality evaluation function and optimized using a convolutional neural network that learns complex frequency-domain features. Additionally, a GPU-integrated 3D reconstruction framework is developed, improving data throughput and real-time performance via a dual-queue decoupling mechanism. Experimental results demonstrate significant improvements in structural similarity (0.92), peak signal-to-noise ratio (31.62 dB), and stripe suppression ratio (15.73 dB) compared with existing methods. On an RTX 4090 platform, the proposed system achieved an end-to-end delay of 94.36 ms, a frame rate of 10.3 frames per second, and a throughput of 121.5 million voxels per second, effectively suppressing artifacts while preserving image details and enhancing real-time 3D reconstruction performance.
Title: "Enhancing SS-OCT 3D image reconstruction: A real-time system with stripe artifact suppression and GPU parallel acceleration". Virtual Reality Intelligent Hardware, 8(1): 115-130.
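The abstract above gives no implementation details. As a rough sketch of the general idea behind frequency-domain stripe suppression — a fixed notch filter, not the authors' adaptive, CNN-tuned method — horizontal stripes can be removed by zeroing the narrow spectral band they occupy:

```python
import numpy as np

def suppress_stripes(img, notch_halfwidth=2, keep_dc=4):
    """Attenuate horizontal stripes by zeroing the vertical-frequency
    band they occupy in the shifted 2D spectrum, while preserving the
    low-frequency block around DC (overall brightness)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w))
    # Horizontal stripes concentrate on the kx == 0 column of the spectrum.
    mask[:, cx - notch_halfwidth:cx + notch_halfwidth + 1] = 0.0
    # Keep a small block around DC so global intensity survives.
    mask[cy - keep_dc:cy + keep_dc + 1, cx - keep_dc:cx + keep_dc + 1] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Synthetic B-scan: vertical structure plus pure horizontal stripes.
x = np.arange(128) / 128.0
clean = 0.5 + 0.3 * np.sin(2 * np.pi * 5 * x)[None, :] * np.ones((128, 1))
stripes = 0.4 * np.sin(2 * np.pi * 20 * x)[:, None] * np.ones((1, 128))
out = suppress_stripes(clean + stripes)  # recovers `clean` almost exactly
```

In a real SS-OCT pipeline the notch parameters would be adapted per frame (as the paper's quality-evaluation function does); `notch_halfwidth` and `keep_dc` here are illustrative constants.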
Pub Date: 2026-02-01 | Epub Date: 2026-03-14 | DOI: 10.1016/j.vrih.2025.12.002
Yuanyuan WANG , Dawei LU , Jingfan FAN , Deqiang XIAO , Danni AI , Tianyu FU , Yucong LIN , Long SHAO , Tao CHEN , Hong SONG , Yongtian WANG , Jian YANG
Surgical navigation has evolved significantly through advances in augmented reality, virtual reality, and mixed reality, improving precision and safety across many clinical applications, including neurosurgical, maxillofacial, spinal, and arthroplasty procedures. By integrating preoperative imaging with real-time intraoperative data, these systems provide dynamic guidance, reduce radiation exposure, and minimize tissue damage. Key challenges persist, including intraoperative registration accuracy, flexible tissue deformation, respiratory compensation, and real-time imaging quality. Emerging solutions include artificial intelligence-driven segmentation, deformation-field modeling, and hybrid registration techniques. Future developments will include lightweight, portable systems, improved non-rigid registration algorithms, and greater clinical adoption. Despite advances in rigid-tissue applications, soft-tissue navigation requires additional innovation to address motion variability and registration reliability, ultimately advancing minimally invasive surgery and precision medicine.
Title: "Augmented reality surgical navigation: Clinical applications, key technologies, and future directions". Virtual Reality Intelligent Hardware, 8(1): 1-27.
Pub Date: 2026-02-01 | Epub Date: 2026-03-14 | DOI: 10.1016/j.vrih.2026.01.001
Zhiqi HUANG , Deqiang XIAO , Hongxun LIU , Long SHAO , Danni AI , Jingfan FAN , Tianyu FU , Yucong LIN , Hong SONG , Jian YANG
Background
Computed tomography (CT) and cone-beam computed tomography (CBCT) image registration play pivotal roles in computer-assisted navigation for orthopedic surgery. Traditional methods often apply uniform deformation models, neglecting the biomechanical differences between rigid structures and soft tissues, which compromises registration accuracy, especially during significant bone displacements.
Method
To address this issue, we introduce RE-Reg, a rigid-elastic CT-CBCT image registration framework that jointly learns rigid bone motion and soft tissue deformation. RE-Reg incorporates a rigid alignment (RA) module to estimate global bone motion and an elastic deformation (ED) module to model soft tissue deformation, preserving bony structures through bone shape preservation (BSP) loss.
Result
Our comprehensive evaluation on publicly available datasets demonstrates that RE-Reg significantly outperforms existing methods in terms of registration accuracy and rigid bone structure preservation, achieving a 1.3% improvement in Dice similarity coefficient (DSC) and a 23% reduction in rigid bone deformation (%Δvol) compared with the best baseline.
Conclusion
This framework not only enhances anatomical fidelity but also ensures biomechanical plausibility, providing a valuable tool for image-guided orthopedic surgery. The code is available at https://github.com/Zq-Huang/RE-Reg.
Title: "Enhanced CT-CBCT image registration for orthopedic surgery: Integrating rigid-elastic motion models". Virtual Reality Intelligent Hardware, 8(1): 87-100.
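The RE-Reg modules themselves are not described beyond the abstract above. As an illustrative sketch of the underlying rigid-then-elastic composition (function name, shapes, and sampling convention are assumptions, not the paper's API), a warp combining a global rigid transform with a dense displacement field might look like:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_rigid_elastic(moving, R, t, disp):
    """Warp a 3D volume by a global rigid transform (R, t) followed by
    a dense elastic displacement field disp of shape (3, D, H, W):
    each output voxel x samples the moving image at R @ x + t + disp(x)."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in moving.shape],
                                indexing="ij"), axis=0).astype(float)
    coords = np.tensordot(R, grid.reshape(3, -1), axes=1) + t[:, None]
    coords = coords.reshape(grid.shape) + disp
    return map_coordinates(moving, coords, order=1, mode="nearest")

# Sanity checks: identity transform reproduces the volume; a unit
# translation along the first axis shifts it by one voxel.
vol = np.random.default_rng(0).random((6, 6, 6))
no_disp = np.zeros((3, 6, 6, 6))
identity = compose_rigid_elastic(vol, np.eye(3), np.zeros(3), no_disp)
shifted = compose_rigid_elastic(vol, np.eye(3), np.array([1.0, 0.0, 0.0]), no_disp)
```

In RE-Reg both the rigid motion and the displacement field are predicted by learned modules (RA and ED); here they are simply passed in as arguments.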
Pub Date: 2026-02-01 | Epub Date: 2026-03-14 | DOI: 10.1016/j.vrih.2026.01.002
David Bamidele OLAWADE , Ezenwa Robinson MODUM , Olabanke Florence OLAWUYI , Omobolaji Rosemary OLASILOLA , Babajide David MAKANJUOLA , John Oluwatosin ALABI
Digital twin technology, which creates virtual replicas of physical entities using real-time data and simulation models, has emerged as a transformative innovation across multiple healthcare domains. Its application in physiotherapy and rehabilitation represents a paradigm shift from traditional therapeutic approaches to personalized, data-driven interventions that optimize patient outcomes. This narrative review examines the current applications, benefits, challenges, and future prospects of digital twin technology in physiotherapy and rehabilitation, providing a comprehensive analysis of how this technology is reshaping clinical practice and patient care. A narrative review approach was employed, systematically searching the PubMed, IEEE Xplore, Scopus, and Web of Science databases. Studies describing digital twin applications, development methodologies, clinical implementations, and theoretical frameworks in physiotherapy and rehabilitation contexts were included. Digital twin technology demonstrates significant potential for personalizing rehabilitation programs, enabling real-time monitoring of patient progress, predicting treatment outcomes, and facilitating remote therapeutic interventions. Current applications span musculoskeletal rehabilitation, neurological recovery, post-surgical care, and sports injury management. Key benefits include enhanced treatment precision, improved patient engagement, reduced healthcare costs, and accelerated recovery times. However, implementation faces challenges including technological complexity, data privacy concerns, interoperability issues, and the need for substantial infrastructure investment. Digital twin technology represents a promising frontier in physiotherapy and rehabilitation, offering unprecedented opportunities for personalized, efficient, and effective patient care.
Successful integration requires addressing the current limitations while fostering interdisciplinary collaboration between clinicians, engineers, and data scientists.
Title: "The role of digital twin technology in physiotherapy and rehabilitation practice". Virtual Reality Intelligent Hardware, 8(1): 71-86.
Pub Date: 2026-02-01 | Epub Date: 2026-03-14 | DOI: 10.1016/j.vrih.2023.08.009
Chi Weng MA , Ruien SHEN , Deli DONG , Shuangjiu XIAO
Background
3D botanical tree reconstruction from a single image plays a vital role in the field of computer graphics. However, accurately capturing the intricate branching patterns and detailed morphologies of trees remains a challenge.
Methods
In this study, we proposed a novel approach for single-image tree reconstruction that uses a conditional generative adversarial network to infer the 3D skeleton of a tree in the form of a 2D skeleton depth map. From this depth map, a corresponding branching structure (3D skeleton) that inherits the tree shape of the input image, along with leaves, can be generated using a procedural modeling technique.
Result
Experimental results show that the proposed method accurately reconstructs diverse tree structures across species. Both quantitative and qualitative evaluations demonstrate improved skeleton completeness, branching accuracy, and visual realism over baseline methods, while requiring no user input.
Conclusions
Our proposed approach for generating lifelike 3D tree models from a single image with no user input shows its proficiency in achieving efficient and reliable reconstruction. These results showcase the capability of the proposed model to recreate complex tree architectures while capturing their visual authenticity.
Title: "Botanical tree reconstruction from a single image via 3D GAN-based skeletonization". Virtual Reality Intelligent Hardware, 8(1): 101-114.
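As a toy illustration of the intermediate representation described above — a 2D skeleton depth map from which 3D skeleton points are recovered — the lift can be sketched under a simple orthographic assumption (the paper's actual procedural modeling step is considerably more involved):

```python
import numpy as np

def skeleton_depth_to_3d(depth, mask):
    """Lift a 2D skeleton depth map to 3D skeleton points: a skeleton
    pixel (row, col) with depth d becomes the point (col, row, d),
    under an orthographic camera convention."""
    rows, cols = np.nonzero(mask)
    return np.column_stack([cols, rows, depth[rows, cols]]).astype(float)

depth = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
depth[1, 2], mask[1, 2] = 3.0, True   # one skeleton pixel at row 1, col 2
pts = skeleton_depth_to_3d(depth, mask)  # -> [[2., 1., 3.]]
```

The GAN's role in the paper is to predict `depth` (and implicitly `mask`) from a photograph; this sketch only shows how such a map encodes 3D structure.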
Pub Date: 2026-02-01 | Epub Date: 2026-03-14 | DOI: 10.1016/j.vrih.2025.12.004
Shuo WANG , Pengju ZHANG , Yihong WU
LiDAR sensors and cameras are among the most common sensors used in robot perception, autonomous driving, augmented reality, and virtual reality, where they perform various tasks such as odometry estimation and 3D reconstruction. Fusing the information from these two sensors can significantly increase the robustness and accuracy of such perception tasks. Extrinsic calibration between cameras and LiDAR is a fundamental prerequisite for multimodal systems. Recently, extensive studies have been conducted on the calibration of extrinsic parameters. Although several calibration methods facilitate sensor fusion, a comprehensive summary for researchers and, especially, non-expert users is lacking. Thus, we present an overview of extrinsic calibration and discuss diverse calibration methods from the perspective of calibration system design. Based on the source of calibration information, this study classifies these methods as target-based or targetless. For each type of calibration method, further classification is performed according to the types of features or constraints used in the calibration process, and their detailed implementations and key characteristics are introduced. Thereafter, calibration-accuracy evaluation methods are presented. Finally, we comprehensively compare the advantages and disadvantages of each calibration method and suggest directions for practical applications and future research.
Title: "Review of extrinsic parameter calibration of LiDAR and camera". Virtual Reality Intelligent Hardware, 8(1): 28-70.
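As a minimal example of the target-based family surveyed above — assuming matched 3D target centers are already available in both sensor frames, which is not how any specific surveyed method necessarily works — the extrinsic rotation and translation can be estimated with the Kabsch algorithm:

```python
import numpy as np

def estimate_extrinsics(pts_lidar, pts_cam):
    """Estimate R, t such that pts_cam ≈ R @ pts_lidar + t, via the
    Kabsch algorithm: SVD of the cross-covariance of centered points."""
    cl, cc = pts_lidar.mean(axis=0), pts_cam.mean(axis=0)
    H = (pts_lidar - cl).T @ (pts_cam - cc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cc - R @ cl

# Recover a known transform from six matched target centers.
pts_lidar = np.random.default_rng(1).normal(size=(6, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.1, -0.2, 0.5])
pts_cam = pts_lidar @ R_true.T + t_true
R_est, t_est = estimate_extrinsics(pts_lidar, pts_cam)
```

Real calibration pipelines must first establish the correspondences (e.g., by detecting a checkerboard in both modalities) and typically refine this closed-form estimate with a nonlinear optimization over reprojection error.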
Pub Date: 2025-10-01 | Epub Date: 2025-11-06 | DOI: 10.1016/j.vrih.2025.08.003
Xiao HU , Xiaolong WU , Mingcong MA , Xiang XU , Yiping GU , Gaoyuan WANG , Yanning XU , Xiangxu MENG , Lu WANG
With technological advancements, virtual reality (VR), once limited to high-end professional applications, is rapidly expanding into entertainment and broader consumer domains. However, the inherent contradiction between mobile hardware computing power and the demand for high-resolution, high-refresh-rate rendering has intensified, leading to critical bottlenecks, including frame latency and power overload, which constrain large-scale applications of VR systems. This study systematically analyzes four key technologies for efficient VR rendering: (1) foveated rendering, which dynamically reduces rendering precision in peripheral regions based on the physiological characteristics of the human visual system (HVS), thereby significantly decreasing graphics computation load; (2) stereo rendering, optimized through consistent stereo rendering acceleration algorithms; (3) cloud rendering, utilizing object-based decomposition and illumination-based decomposition for distributed resource scheduling; and (4) low-power rendering, integrating parameter-optimized rendering, super-resolution technology, and frame-generation technology to enhance mobile energy efficiency. Through a systematic review of the core principles and optimization approaches of these technologies, this study establishes research benchmarks for developing efficient VR systems that achieve high fidelity and low latency while providing further theoretical support for the engineering implementation and industrial advancement of VR rendering technologies.
Title: "Efficient VR rendering: Survey on foveated, stereo, cloud, and low-power rendering techniques". Virtual Reality Intelligent Hardware, 7(5): 421-452.
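As a toy illustration of the foveated rendering idea surveyed above — shading gets coarser with eccentricity from the gaze point — a per-pixel shading-rate map might be computed as follows (the radii and rate values are illustrative assumptions, not figures from the survey):

```python
import numpy as np

def shading_rate_map(h, w, gaze, radii=(0.1, 0.25), rates=(1, 2, 4)):
    """Assign a per-pixel shading rate from distance to the gaze point
    (as a fraction of the image diagonal): full rate (1) in the fovea,
    coarser rates (2, 4) toward the periphery."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze[0], xs - gaze[1]) / np.hypot(h, w)
    rate = np.full((h, w), rates[2], dtype=int)
    rate[dist < radii[1]] = rates[1]
    rate[dist < radii[0]] = rates[0]
    return rate

rmap = shading_rate_map(120, 160, gaze=(60, 80))
```

In practice such a map would feed hardware variable-rate shading, and the gaze point would come from an eye tracker each frame rather than being fixed.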
Pub Date: 2025-10-01 | Epub Date: 2025-11-06 | DOI: 10.1016/j.vrih.2025.08.001
Xinming XU , Haoxuan LI , Zhouyu GUAN , Dian ZENG , Qingqing ZHENG , Yiming QIN , Yang WEN , Huating LI , Chwee Teck LIM , Tien Yin WONG , Enhua WU , Weiping JIA , Bin SHENG
The convergence of large language models (LLMs) and virtual reality (VR) technologies has led to significant breakthroughs across multiple domains, particularly in healthcare and medicine. Owing to its immersive and interactive capabilities, VR technology has demonstrated exceptional utility in surgical simulation, rehabilitation, physical therapy, mental health, and psychological treatment. By creating highly realistic and precisely controlled environments, VR not only enhances the efficiency of medical training but also enables personalized therapeutic approaches for patients. The convergence of LLMs and VR extends the potential of both technologies. LLM-empowered VR can transform medical education through interactive learning platforms and address complex healthcare challenges using comprehensive solutions. This convergence enhances the quality of training, decision-making, and patient engagement, paving the way for innovative healthcare delivery. This study aims to comprehensively review the current applications, research advancements, and challenges associated with these two technologies in healthcare and medicine. The rapid evolution of these technologies is driving the healthcare industry toward greater intelligence and precision, establishing them as critical forces in the transformation of modern medicine.
Title: "Urgent needs, opportunities and challenges of virtual reality in healthcare and medicine in the era of large language models". Virtual Reality Intelligent Hardware, 7(5): 453-467.
Pub Date : 2025-10-01Epub Date: 2025-11-06DOI: 10.1016/j.vrih.2025.08.002
Zhiqi XU , Yuyong ZHAO , Jie WANG , Jian CHANG , Yuetian ZHANG
Background
Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by difficulties in social communication and restricted, repetitive behaviors. Early intervention is essential to improve developmental outcomes in children with ASD. Serious games, which combine educational objectives with game-based interactions, have shown potential as tools for early intervention in patients with ASD. However, in China, the development of serious games specifically designed for children with ASD remains in its infancy, with significant gaps in technical frameworks and effective data management methods.
Method
This paper proposes a framework aimed at facilitating the development of multimodal serious games designed for ASD interventions. We demonstrated the feasibility of the framework by developing and integrating several components, such as web applications, mobile games, and augmented reality games. These tools are interconnected to achieve data connectivity and management. Additionally, adaptive mechanics were employed within the framework to analyze real-time player data, which allowed the game difficulty to be dynamically adjusted and provide a personalized experience for each child.
Results
The framework successfully integrated various multimodal games, ensuring that real-time data management supported personalized game experiences. This approach ensured that the interventions remained appropriately challenging while still achievable.
Conclusion
The results indicate that the proposed framework enhances collaboration among therapists, parents, and developers while also improving the effectiveness of ASD interventions. By delivering personalized gameplay experiences that are both challenging and achievable, the framework offers a scalable platform for the future development of serious games.
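The abstract describes adaptive mechanics that analyze real-time player data and adjust game difficulty to keep the experience challenging but achievable. The paper's abstract gives no implementation details, so the following is only a minimal illustrative sketch, not the authors' method: all class names, window sizes, and thresholds are hypothetical. It shows one common way such a mechanic is built, by tracking a rolling success rate and nudging a difficulty level whenever the rate leaves a target band.

```python
from collections import deque

class AdaptiveDifficulty:
    """Track a rolling window of recent trial outcomes and adjust a
    difficulty level so the player's success rate stays in a target band."""

    def __init__(self, window=10, target_low=0.6, target_high=0.8):
        self.outcomes = deque(maxlen=window)  # 1 = success, 0 = failure
        self.target_low = target_low
        self.target_high = target_high
        self.level = 1  # difficulty level, 1 (easiest) .. 10

    def record(self, success: bool) -> int:
        """Log one trial outcome and return the (possibly updated) level."""
        self.outcomes.append(1 if success else 0)
        # Only adjust once the window holds enough data to be meaningful.
        if len(self.outcomes) == self.outcomes.maxlen:
            rate = sum(self.outcomes) / len(self.outcomes)
            if rate > self.target_high and self.level < 10:
                self.level += 1   # too easy: raise difficulty
            elif rate < self.target_low and self.level > 1:
                self.level -= 1   # too hard: lower difficulty
        return self.level
```

The target band (60%-80% success here) is what keeps interventions "challenging while still achievable"; a real system would tune the band and window per child rather than hard-code them.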
{"title":"Framework for adaptive multimodal serious games for early intervention of autistic children","authors":"Zhiqi XU , Yuyong ZHAO , Jie WANG , Jian CHANG , Yuetian ZHANG","doi":"10.1016/j.vrih.2025.08.002","DOIUrl":"10.1016/j.vrih.2025.08.002","url":null,"abstract":"<div><h3>Background</h3><div>Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by difficulties in social communication and restricted, repetitive behaviors. Early intervention is essential to improve developmental outcomes in children with ASD. Serious games, which combine educational objectives with game-based interactions, have shown potential as tools for early intervention in patients with ASD. However, in China, the development of serious games specifically designed for children with ASD remains in its infancy, with significant gaps in technical frameworks and effective data management methods.</div></div><div><h3>Method</h3><div>This paper proposes a framework aimed at facilitating the development of multimodal serious games designed for ASD interventions. We demonstrated the feasibility of the framework by developing and integrating several components, such as web applications, mobile games, and augmented reality games. These tools are interconnected to achieve data connectivity and management. Additionally, adaptive mechanics were employed within the framework to analyze real-time player data, which allowed the game difficulty to be dynamically adjusted and provide a personalized experience for each child.</div></div><div><h3>Results</h3><div>The framework successfully integrated various multimodal games, ensuring that real-time data management supported personalized game experiences. 
This approach ensured that the interventions remained appropriately challenging while still achievable.</div></div><div><h3>Conclusion</h3><div>The results indicate that the proposed framework enhances collaboration among therapists, parents, and developers while also improving the effectiveness of ASD interventions. By delivering personalized gameplay experiences that are both challenging and achievable, the framework offers a scalable platform for the future development of serious games.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 5","pages":"Pages 523-542"},"PeriodicalIF":0.0,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145448547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background
Vibrotactile feedback systems are widely used in assistive technology, wearable devices, and virtual environments to deliver precise tactile information. The timing of interstimulus intervals (ISIs) plays a critical role in determining how accurately users perceive and interpret vibrotactile patterns. The optimal use of ISIs can increase the effectiveness of these systems, improve user interaction, and enable reliable, intuitive feedback in diverse applications. We examined how different ISIs impact the accuracy of vibrotactile pattern recognition.
Methods
Participants wore a forearm-mounted device with six voice coil actuators arranged in a 3 × 2 grid, delivering Braille-based vibrotactile patterns sequentially at ISIs ranging from 10 to 2500 ms. Eight participants performed identification tasks involving Icelandic Braille patterns categorized as either short (2–3 actuators) or long (4–5 actuators). A repeated measures ANOVA was conducted to assess the effects of ISI, pattern type, and practice (across two testing blocks) on pattern recognition accuracy.
Results
For short patterns, accuracy was highest (92%–98%) at ISIs of 50–700 ms, with peak performance at 300 ms. For long patterns, accuracy reached 86%–94% at ISIs of 100–500 ms, peaking at 400 ms. Participants were more accurate with short patterns, and performance improved significantly over time for both short and long patterns, highlighting the importance of training for vibrotactile pattern recognition.
Conclusions
These results underscore the importance of careful selection of ISIs in vibrotactile feedback systems for accurate pattern identification. The findings provide valuable insights for conveying tactile information using wearable devices, contributing to better tactile feedback and performance in applications requiring precise vibrotactile information delivery.
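The accuracy results above (e.g., peak accuracy at 300 ms ISI for short patterns and 400 ms for long patterns) come from aggregating per-trial identification outcomes by ISI and pattern type. The sketch below is illustrative only, not the authors' analysis code; the trial-tuple format is an assumption. It shows the basic aggregation step that would precede a repeated-measures ANOVA: computing accuracy per (pattern type, ISI) cell and locating the peak ISI.

```python
from collections import defaultdict

def accuracy_by_isi(trials):
    """trials: iterable of (isi_ms, pattern_type, correct) tuples.
    Returns {pattern_type: {isi_ms: accuracy}}."""
    counts = defaultdict(lambda: [0, 0])  # (type, isi) -> [n_correct, n_total]
    for isi, ptype, correct in trials:
        cell = counts[(ptype, isi)]
        cell[0] += int(correct)
        cell[1] += 1
    result = defaultdict(dict)
    for (ptype, isi), (n_correct, n_total) in counts.items():
        result[ptype][isi] = n_correct / n_total
    return result

def peak_isi(acc_for_type):
    """ISI with the highest identification accuracy for one pattern type."""
    return max(acc_for_type, key=acc_for_type.get)
```

The resulting per-cell accuracies are what a repeated-measures ANOVA (with ISI, pattern type, and testing block as within-subject factors, as in the abstract) would then test for significance.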
{"title":"Vibrotactile pattern recognition:Influence of interstimulus intervals","authors":"Nashmin YEGANEH , Ivan MAKAROV , Árni KRISTJÁNSSON , Runar UNNTHORSSON","doi":"10.1016/j.vrih.2025.06.001","DOIUrl":"10.1016/j.vrih.2025.06.001","url":null,"abstract":"<div><h3>Background</h3><div>Vibrotactile feedback systems are widely used in assistive technology, wearable devices, and virtual environments to deliver precise tactile information. The timing of interstimulus intervals (ISIs) plays a critical role in determining how accurately users perceive and interpret vibrotactile patterns. The optimal use of ISIs can increase the effectiveness of these systems, improve user interaction, and enable reliable, intuitive feedback in diverse applications. We examined how different interstimulus intervals ISIs impact the accuracy of vibrotactile pattern recognition.</div></div><div><h3>Methods</h3><div>Participants wore a forearm-mounted device with six voice coil actuators arranged in a 3 × 2 grid, delivering Braille-based vibrotactile patterns sequentially at ISIs ranging from 10 to 2500 ms. Eight participants performed identification tasks involving Icelandic Braille patterns categorized as either short (2–3 actuators) or long (4–5 actuators). A repeated measures ANOVA was conducted to assess the effects of ISI, pattern type, and practice (across two testing blocks) on pattern recognition accuracy.</div></div><div><h3>Results</h3><div>For short patterns, accuracy was highest (92%–98%) at ISIs of 50–700 ms, with peak performance at 300 ms. For long patterns, accuracy reached 86%–94% at ISIs of 100–500 ms, peaking at 400 ms. 
Participants were more accurate with short patterns, and performance improved significantly over time for both short and long patterns, highlighting the importance of training for vibrotactile pattern recognition.</div></div><div><h3>Conclusions</h3><div>These results underscore the importance of careful selection of ISIs in vibrotactile feedback systems for accurate pattern identification. The findings provide valuable insights for conveying tactile information using wearable devices, contributing to better tactile feedback and performance in applications requiring precise vibrotactile information delivery.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 5","pages":"Pages 483-500"},"PeriodicalIF":0.0,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145449301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}