Accelerate Cutting Tasks in Real-Time Interactive Cutting Simulation of Deformable Objects
Pub Date : 2026-01-01 DOI: 10.1109/MCG.2025.3538985
Shiyu Jia, Qian Dong, Zhenkuan Pan, Xiaokang Yu, Wenli Xiu, Jingli Zhang
Simulation speed is crucial for virtual reality simulators that involve real-time interactive cutting of deformable objects, such as surgical simulators. Previous efforts to accelerate these simulations achieved significant speed increases during noncutting periods but only moderate ones during cutting periods. This article aims to further increase the latter. Three novel methods are proposed: first, GPU-based updates of the mass and stiffness matrices of composite finite elements; second, GPU-based collision processing between cutting tools and deformable objects; and third, redesigned CPU-GPU synchronization mechanisms combined with GPU acceleration for updating the surface mesh. Simulation tests, including a complex hepatectomy simulation, are performed. Results show that our methods increase the simulation speed during cutting periods by 40.4%-56.5%.
{"title":"Accelerate Cutting Tasks in Real-Time Interactive Cutting Simulation of Deformable Objects.","authors":"Shiyu Jia, Qian Dong, Zhenkuan Pan, Xiaokang Yu, Wenli Xiu, Jingli Zhang","doi":"10.1109/MCG.2025.3538985","DOIUrl":"10.1109/MCG.2025.3538985","url":null,"abstract":"<p><p>Simulation speed is crucial for virtual reality simulators involving real-time interactive cutting of deformable objects, such as surgical simulators. Previous efforts to accelerate these simulations resulted in significant speed increases during noncutting periods, but only moderate ones during cutting periods. This article aims to further increase the latter. Three novel methods are proposed: first, GPU-based update of mass and stiffness matrices of composite finite elements, second, GPU-based collision processing between cutting tools and deformable objects, and third, redesigned CPU-GPU synchronization mechanisms combined with GPU acceleration for the update of the surface mesh. Simulation tests, including a complex hepatectomy simulation, are performed. Results show that our methods increase the simulation speed during cutting periods by 40.4%-56.5%.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"66-80"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Agency in Human-AI Collaboration
Pub Date : 2026-01-01 DOI: 10.1109/MCG.2025.3623892
Steffen Holter, Caterina Moruzzi, Mennatallah El-Assady
Artificial intelligence (AI) is increasingly evolving from a tool for automating repetitive tasks into an intelligent agent that actively engages in dynamic interactions with humans. As AI becomes more integrated into collaborative contexts, it is essential to examine the factors that shape human-AI interaction. Central to this collaboration is AI agency (the capacity for action and effect), a concept that has remained largely peripheral in existing research. This article addresses this gap by proposing a comprehensive design space for reasoning about agency in human-AI collaboration. We introduce the high-level perspectives of distribution, modeling, and attribution to outline key dimensions that inform the design of agency in such systems. Our methodology combines a literature review with expert interviews to consolidate existing concepts and surface new insights. To exemplify the capacity of our framework, we reason about three mixed-initiative systems through the lens of our conceptual model. Finally, we identify future directions and critical research gaps in this emerging area.
{"title":"Toward Agency in Human-AI Collaboration.","authors":"Steffen Holter, Caterina Moruzzi, Mennatallah El-Assady","doi":"10.1109/MCG.2025.3623892","DOIUrl":"10.1109/MCG.2025.3623892","url":null,"abstract":"<p><p>Artificial intelligence (AI) is increasingly evolving from a tool for automating repetitive tasks to an intelligent agent actively engaging in dynamic interactions with humans. As AI becomes more integrated into collaborative contexts, it is essential to examine the factors that shape human-AI interaction. Central to this collaboration is AI agency-the capacity for action and effect-a concept that has remained largely peripheral in existing research. This article addresses this gap by proposing a comprehensive design space for reasoning about agency in human-AI collaboration. We introduce the high-level perspectives of distribution, modeling, and attribution to outline key dimensions that inform the design of agency in such systems. Our methodology combines a literature review with expert interviews to consolidate existing concepts and surface new insights. To exemplify the capacity of our framework, we reason about three mixed-initiative systems through the lens of our conceptual model. Finally, we identify future directions and critical research gaps in this emerging area.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"13-25"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145338241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybridizing Expressive Rendering: Stroke-Based Rendering With Classic and Neural Methods
Pub Date : 2026-01-01 DOI: 10.1109/MCG.2025.3624600
Kapil Dev, Rahul C Basole, Francesco Ferrise
Nonphotorealistic rendering (NPR) has long been used to create artistic visualizations that prioritize style over realism, enabling the depiction of a wide range of aesthetic effects, from hand-drawn sketches to painterly renderings. Classical NPR methods, such as edge detection, toon shading, and geometric abstraction, are well established in both research and practice, with a particular focus on stroke-based rendering; the recent rise of deep learning, however, represents a paradigm shift. We analyze the similarities and differences between classical and neural network-based NPR techniques, focusing on stroke-based rendering and highlighting their strengths and limitations. We discuss tradeoffs in quality and artistic control between these paradigms and propose a framework in which the approaches can be combined to open new possibilities in expressive rendering.
{"title":"Hybridizing Expressive Rendering: Stroke-Based Rendering With Classic and Neural Methods.","authors":"Kapil Dev, Rahul C Basole, Francesco Ferrise","doi":"10.1109/MCG.2025.3624600","DOIUrl":"https://doi.org/10.1109/MCG.2025.3624600","url":null,"abstract":"<p><p>Nonphotorealistic rendering (NPR) has long been used to create artistic visualizations that prioritize style over realism, enabling the depiction of a wide range of aesthetic effects, from hand-drawn sketches to painterly renderings. While classical NPR methods, such as edge detection, toon shading, and geometric abstraction, have been well established in both research and practice, with a particular focus on stroke-based rendering, the recent rise of deep learning represents a paradigm shift. We analyze the similarities and differences between classical and neural network-based NPR techniques, focusing on stroke-based rendering, highlighting their strengths and limitations. We discuss tradeoffs in quality and artistic control between these paradigms and propose a framework where these approaches can be combined for new possibilities in expressive rendering.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"46 1","pages":"116-125"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-Barrier Dataset Collection With Real Human Body for Interactive Per-Garment Virtual Try-On
Pub Date : 2025-12-31 DOI: 10.1109/MCG.2025.3649499
Zaiqiang Wu, Yechen Li, Jingyuan Liu, Yuki Shibata, Takayuki Hori, I-Chao Shen, Takeo Igarashi
Existing image-based virtual try-on methods are limited to frontal views and lack real-time performance. While per-garment virtual try-on methods have tackled these issues by adopting per-garment training, they still face practical limitations: (1) the robotic mannequin used for per-garment dataset collection is prohibitively expensive; (2) the synthesized garments often misalign with the human body. To address these challenges, we propose a low-barrier approach to collecting per-garment datasets using real human bodies, eliminating the need for an expensive robotic mannequin and reducing data collection time from 2 hours to 2 minutes. Additionally, we introduce a hybrid person representation that ensures precise human-garment alignment. We conducted qualitative and quantitative comparisons with state-of-the-art image-based virtual try-on methods to demonstrate the superiority of our method in image quality and temporal consistency. Furthermore, most participants in our user study found the system effective in supporting garment purchasing decisions.
{"title":"Low-Barrier Dataset Collection With Real Human Body for Interactive Per-Garment Virtual Try-On.","authors":"Zaiqiang Wu, Yechen Li, Jingyuan Liu, Yuki Shibata, Takayuki Hori, I-Chao Shen, Takeo Igarashi","doi":"10.1109/MCG.2025.3649499","DOIUrl":"https://doi.org/10.1109/MCG.2025.3649499","url":null,"abstract":"<p><p>Existing image-based virtual try-on methods are limited to frontal views and lack real-time performance. While per-garment virtual try-on methods have tackled these issues by adopting per-garment training, they still encounter practical limitations: (1) the robotic mannequin used for per-garment datasets collection is prohibitively expensive; (2) the synthesized garments often misalign with the human body. To address these challenges, we propose a low-barrier approach to collect per-garment datasets using real human bodies, eliminating the need for an expensive robotic mannequin and reducing data collection time from 2 hours to 2 minutes. Additionally, we introduce a hybrid person representation that ensures precise human-garment alignment. We conducted qualitative and quantitative comparisons with state-of-the-art image-based virtual try-on methods to demonstrate the superiority of our method regarding image quality and temporal consistency. Furthermore, most participants in our user study found the system effective in supporting garment purchasing decisions.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145879409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiencing Data Visualization with Language Disability
Pub Date : 2025-12-10 DOI: 10.1109/MCG.2025.3642747
Jo Wood, Niamh Devane, Abi Roper, Nicola Botting, Madeline Cruice, Ulfa Octaviani, Stephanie Wilson
Current data visualization research demonstrates very limited inclusion of users with language disabilities. To address this, this paper introduces two language disabilities: Developmental Language Disorder (DLD) and aphasia. We present outcomes from a novel qualitative diary study exploring whether people living with either DLD or aphasia experience and engage with data visualization in their day-to-day lives. Outcomes reveal evidence of both exposure to, and engagement with, data visualization across a week-long period, alongside accompanying experiences of inclusion in, and exclusion from, the benefits of data visualization. We report the types of data visualization tasks and application domains encountered and describe the issues experienced by participants. Findings highlight a critical need for increased awareness of language access needs within the discipline of data visualization and make a case for further research into design practices inclusive of people with language disabilities.
{"title":"Experiencing Data Visualization with Language Disability.","authors":"Jo Wood, Niamh Devane, Abi Roper, Nicola Botting, Madeline Cruice, Ulfa Octaviani, Stephanie Wilson","doi":"10.1109/MCG.2025.3642747","DOIUrl":"https://doi.org/10.1109/MCG.2025.3642747","url":null,"abstract":"<p><p>Current data visualization research demonstrates very limited inclusion of users with language disabilities. To address this, this paper introduces the language disabilities Developmental Language Disorder (DLD) and aphasia. We present outcomes from a novel qualitative diary study exploring whether people living with either DLD or aphasia experience and engage with data visualization in their day-to-day lives. Outcomes reveal evidence of both exposure to, and engagement with, data visualization across a week-long period alongside accompanying experiences of inclusion and exclusion of the benefits of data visualization. We report types of data visualization tasks and application domains encountered and descriptions of issues experienced by participants. Findings highlight a critical need for increased awareness of language access needs within the discipline of data visualization and a case for further research into design practices inclusive of people with language disabilities.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145727348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HyShare: Hybrid Sample and Shading Reuse for Real-time Photorealistic Rendering
Pub Date : 2025-11-28 DOI: 10.1109/MCG.2025.3638242
Yubin Zhou, Xiyun Song, Zhiqiang Lao, Yu Guo, Zongfang Lin, Heather Yu, Liang Peng
Real-time path tracing is computationally expensive due to intensive path sampling and shading, especially under high-frame-rate and high-resolution demands. We present HyShare, a hybrid reuse algorithm that integrates ReSTIR-style path sample reuse with adaptive shading reuse across the spatial and temporal domains. Unlike prior methods that treat each kind of reuse in isolation, HyShare jointly optimizes both reuse types, addressing their interdependencies while maintaining image fidelity. To prevent artifacts caused by stale data and correlation, we introduce per-pixel validation and dynamic refresh mechanisms. Our system adaptively disables reuse in motion-sensitive regions using radiance and geometric change checks. Evaluated on complex dynamic scenes, HyShare outperforms state-of-the-art baselines, including ReSTIR DI, ReSTIR PT, and Area ReSTIR, improving rendering speed by 37.4% and boosting image quality (PSNR +1.8 dB, SSIM +0.17). These results demonstrate the effectiveness and generalizability of HyShare in advancing real-time photorealistic rendering.
{"title":"HyShare: Hybrid Sample and Shading Reuse for Real-time Photorealistic Rendering.","authors":"Yubin Zhou, Xiyun Song, Zhiqiang Lao, Yu Guo, Zongfang Lin, Heather Yu, Liang Peng","doi":"10.1109/MCG.2025.3638242","DOIUrl":"https://doi.org/10.1109/MCG.2025.3638242","url":null,"abstract":"<p><p>Real-time path tracing is computationally expensive due to intensive path sampling and shading, especially under high frame rate and high resolution demands. We present HyShare, a hybrid reuse algorithm that integrates ReSTIR-style path sample reuse with adaptive shading reuse across spatial and temporal domains. Unlike prior methods that treat reuse in isolation, HyShare jointly optimizes both reuse types, addressing their interdependencies while maintaining image fidelity. To prevent artifacts caused by stale data and correlation, we introduce per-pixel validation and dynamic refresh mechanisms. Our system adaptively disables reuse in motion-sensitive regions using radiance and geometric change checks. Evaluated on complex dynamic scenes, HyShare outperforms state-of-the-art baselines-including ReSTIR DI, ReSTIR PT, and Area ReSTIR-improving rendering speed by 37.4% and boosting image quality (PSNR +1.8 dB, SSIM +0.17). These results demonstrate the effectiveness and generalizability of HyShare in advancing real-time photorealistic rendering.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145642820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning-based Eddy Segmentation with Vector-Data for Biochemical Analysis in Ocean Simulations
Pub Date : 2025-11-07 DOI: 10.1109/MCG.2025.3630582
Weiping Hua, Sedat Ozer, Karen Bemis, Zihan Liu, Deborah Silver
Eddies are dynamic, swirling structures in ocean circulation that significantly influence the distribution of heat, nutrients, and plankton, thereby impacting marine biological processes. Accurate eddy segmentation from ocean simulation data is essential for enabling subsequent biological and physical analysis. However, leveraging vector-valued inputs, such as ocean velocity fields, in deep learning-based segmentation models poses unique challenges due to the complexity of representing the vector input in multiple combinations for training. In this paper, we discuss these challenges and provide our solutions. In particular, we present a detailed study of multiple input encoding strategies, including raw velocity components, vector magnitude, and angular direction, and their impact on eddy segmentation performance. We introduce a two-branch attention U-Net architecture that separately encodes vector magnitude and direction. We evaluate seven different network configurations across four large-scale 3D ocean simulation data sets, employing four different segmentation metrics. Our results demonstrate that the proposed two-branch architecture consistently outperforms single-branch variants.
{"title":"Deep Learning-based Eddy Segmentation with Vector-Data for Biochemical Analysis in Ocean Simulations.","authors":"Weiping Hua, Sedat Ozer, Karen Bemis, Zihan Liu, Deborah Silver","doi":"10.1109/MCG.2025.3630582","DOIUrl":"https://doi.org/10.1109/MCG.2025.3630582","url":null,"abstract":"<p><p>Eddies are dynamic, swirling structures in ocean circulation that significantly influence the distribution of heat, nutrients, and plankton, there by impacting marine biological processes. Accurate eddy segmentation from ocean simulation data is essential for enabling subsequent biological and physical analysis. However, leveraging vector-valued inputs, such as ocean velocity fields, in deep learning-based segmentation models poses unique challenges due to the complexity of representing the vector input in multiple combinations for training. In this paper, we discuss such challenges and provide our solutions. In particular, we present a detailed study into multiple input encoding strategies, including raw velocity components, vector magnitude, and angular direction, and their impacton eddy segmentation performance. We introduce a two-branch attention U-Net architecture that separately encodes vector magnitude and direction. We evaluate seven different network configurations across four large-scale 3D ocean simulation data sets, employing four different segmentation metrics. Our results demonstrate that the proposed two-branch architecture consistently out performs single-branch variants.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Visual Analysis in Person Reidentification With Vision-Language Models
Pub Date : 2025-11-01 DOI: 10.1109/MCG.2025.3593227
Wang Xia, Tianci Wang, Jiawei Li, Guodao Sun, Haidong Gao, Xu Tan, Ronghua Liang
Image-based person reidentification aims to match individuals across multiple cameras. Despite advances in machine learning, the effectiveness of existing methods in real-world scenarios remains limited, often leaving users to handle fine-grained matching manually. Recent work has explored textual information as auxiliary cues, but existing methods generate coarse descriptions and fail to integrate them effectively into retrieval workflows. To address these issues, we adopt a vision-language model fine-tuned with domain-specific knowledge to generate detailed textual descriptions and keywords for pedestrian images. We then create a joint search space combining visual and textual information, using image clustering and keyword co-occurrence to build a semantic layout. In addition, we introduce a dynamic spiral word cloud algorithm to improve visual presentation and enhance semantic associations. Finally, we conduct case studies, a user study, and expert feedback sessions, demonstrating the usability and effectiveness of our system.
{"title":"Enhancing Visual Analysis in Person Reidentification With Vision-Language Models.","authors":"Wang Xia, Tianci Wang, Jiawei Li, Guodao Sun, Haidong Gao, Xu Tan, Ronghua Liang","doi":"10.1109/MCG.2025.3593227","DOIUrl":"10.1109/MCG.2025.3593227","url":null,"abstract":"<p><p>Image-based person reidentification aims to match individuals across multiple cameras. Despite advances in machine learning, their effectiveness in real-world scenarios remains limited, often leaving users to handle fine-grained matching manually. Recent work has explored textual information as auxiliary cues, but existing methods generate coarse descriptions and fail to integrate them effectively into retrieval workflows. To address these issues, we adopt a vision-language model fine-tuned with domain-specific knowledge to generate detailed textual descriptions and keywords for pedestrian images. We then create a joint search space combining visual and textual information, using image clustering and keyword co-occurrence to build a semantic layout. In addition, we introduce a dynamic spiral word cloud algorithm to improve visual presentation and enhance semantic associations. Finally, we conduct case studies, a user study, and expert feedback, demonstrating the usability and effectiveness of our system.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"44-60"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144735526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do Language Model Agents Align With Humans in Rating Visualizations? An Empirical Study
Pub Date : 2025-11-01 DOI: 10.1109/MCG.2025.3586461
Zekai Shao, Yi Shan, Yixuan He, Yuxuan Yao, Junhong Wang, Xiaolong Zhang, Yu Zhang, Siming Chen
Large language models (LLMs) show potential in understanding visualizations and may capture design knowledge. However, their ability to predict human feedback remains unclear. To explore this, we conduct three studies evaluating the alignment between LLM-based agents and human ratings in visualization tasks. The first study replicates a human-subject study, showing promising agent performance in human-like reasoning and rating, and informing further experiments. The second study simulates six prior studies using agents and finds that alignment correlates with experts' pre-experiment confidence. The third study tests enhancement techniques, such as input preprocessing and knowledge injection, revealing limitations in robustness and potential bias. These findings suggest that LLM-based agents can simulate human ratings when guided by high-confidence hypotheses from expert evaluators. We also demonstrate a usage scenario in rapid prototyping of study designs and discuss future directions. We note that such simulations can only complement, not replace, user studies.
{"title":"Do Language Model Agents Align With Humans in Rating Visualizations? An Empirical Study.","authors":"Zekai Shao, Yi Shan, Yixuan He, Yuxuan Yao, Junhong Wang, Xiaolong Zhang, Yu Zhang, Siming Chen","doi":"10.1109/MCG.2025.3586461","DOIUrl":"10.1109/MCG.2025.3586461","url":null,"abstract":"<p><p>Large language models (LLMs) show potential in understanding visualizations and may capture design knowledge. However, their ability to predict human feedback remains unclear. To explore this, we conduct three studies evaluating the alignment between LLM-based agents and human ratings in visualization tasks. The first study replicates a human-subject study, showing promising agent performance in human-like reasoning and rating, and informing further experiments. The second study simulates six prior studies using agents and finds that alignment correlates with experts' pre-experiment confidence. The third study tests enhancement techniques, such as input preprocessing and knowledge injection, revealing limitations in robustness and potential bias. These findings suggest that LLM-based agents can simulate human ratings when guided by high-confidence hypotheses from expert evaluators. We also demonstrate the usage scenario in rapid prototyping study designs and discuss future directions. We note that simulation may only serve as complements and cannot replace user studies.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"14-28"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144602294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Pediatric Liver Transplant Therapy With Virtual Reality
Pub Date : 2025-11-01 DOI: 10.1109/MCG.2025.3613129
Laura Raya, Alberto Sanchez, Carmen Martin, Jose Jesus Garcia Rueda, Erika Guijarro, Mike Potel
Surgery and hospital stays for pediatric transplantation involve frequent interventions that require complete sedation, as well as demands of care and self-care, assimilation of the disease, and anxiety for the patient. This article presents the development of a comprehensive tool called virtual transplant reality (VTR), currently used in a hospital with actual patients. Our tool is intended to aid the psychological support of children who have undergone a liver transplant. VTR consists of two applications: a virtual reality application with a head-mounted display worn by the patient and a desktop application for the therapist. After tests carried out at the Hospital Universitario La Paz (Madrid, Spain) over a period of one year with 65 patients, the results indicate that our system offers a series of advantages as a complement to the psychological therapy of pediatric transplant patients.
{"title":"Enhancing Pediatric Liver Transplant Therapy With Virtual Reality.","authors":"Laura Raya, Alberto Sanchez, Carmen Martin, Jose Jesus Garcia Rueda, Erika Guijarro, Mike Potel","doi":"10.1109/MCG.2025.3613129","DOIUrl":"10.1109/MCG.2025.3613129","url":null,"abstract":"<p><p>Surgery and hospital stays for pediatric transplantation involve frequent interventions that require complete sedation, care and self-care, disease assimilation, and anxiety for the patient. This article presents the development of a comprehensive tool called virtual transplant reality (VTR) currently used in a hospital with actual patients. Our tool is intended to provide an aid to the psychological support of children who have undergone a liver transplant. VTR consists of two applications: a virtual reality application with a head mounted display worn by the patient and a desktop application for the therapist. After tests carried out at the Hospital Universitario La Paz (Madrid, Spain) over a period of one year with 65 patients, the results indicate that our system offers a series of advantages as a complement to the psychological therapy of pediatric transplant patients.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"45 6","pages":"130-140"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145497501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}