
Latest publications in Computers & Graphics-UK

Foreword to special section on SIBGRAPI 2025
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-17 · DOI: 10.1016/j.cag.2025.104526
Leonardo Sacht, Marcos Lage, Ricardo Marroquim
{"title":"Foreword to special section on SIBGRAPI 2025","authors":"Leonardo Sacht, Marcos Lage, Ricardo Marroquim","doi":"10.1016/j.cag.2025.104526","DOIUrl":"10.1016/j.cag.2025.104526","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104526"},"PeriodicalIF":2.8,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145796670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sparse-to-dense light field reconstruction based on Spatial–Angular Multi-Dimensional Interaction and Guided Residual Networks
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-16 · DOI: 10.1016/j.cag.2025.104525
Haijiao Gu, Yan Piao
Dense light fields contain rich spatial and angular information, making them highly valuable for applications such as depth estimation, 3D reconstruction, and multi-view elemental image synthesis. Light-field cameras capture both spatial and angular scene information in a single shot. However, due to high hardware requirements and substantial storage costs, practical acquisitions often yield only sparse light-field maps. To address this problem, this paper proposes an efficient end-to-end sparse-to-dense light-field reconstruction method based on Spatial–Angular Multi-Dimensional Interaction and Guided Residual Networks. The Spatial–Angular Multi-Dimensional Interaction Module (SAMDIM) fully exploits the four-dimensional structural information of light-field image data in both the spatial and angular domains. It performs dual-modal interaction across spatial and angular dimensions to generate dense subviews. The channel attention mechanism within the interaction module significantly improves the image quality of these dense subviews. Finally, the Guided Residual Refinement Module (GRRM) further enhances the texture details of the generated dense subviews, improving the reconstruction quality of the dense light field. Experimental results demonstrate that our proposed network model achieves clear advantages over state-of-the-art methods in both visual quality and quantitative metrics on real-world datasets.
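The channel attention inside the interaction module is not detailed in the abstract; below is a minimal squeeze-and-excitation-style sketch in PyTorch of how such a mechanism typically reweights subview feature channels (module name, reduction ratio, and tensor shapes are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention: global average
    pooling, a two-layer bottleneck, then per-channel rescaling."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # squeeze: (B, C, H, W) -> (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),              # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)          # excite: rescale each feature channel

# Example: reweight the feature channels of a batch of synthesized subviews.
feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```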
Citations: 0
Negotiating without turning: Exploring rear-space interaction for negotiated teleportation in VR
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-16 · DOI: 10.1016/j.cag.2025.104522
Hao-Zhong Yang, Wen-Tong Shu, Yi-Jun Li, Miao Wang
Social virtual reality enables multi-user co-presence and collaboration but introduces privacy challenges such as personal space intrusion and unwanted interruptions. Teleportation negotiation techniques help address these issues by allowing users to define teleportation-permitted zones, maintaining spatial boundaries and comfort. However, existing methods primarily focus on the forward view and often require physical rotation to monitor and respond to requests originating from behind. This can disrupt immersion and reduce social presence.
To better understand these challenges, we first conducted a preliminary study to identify users’ needs for rear-space awareness during teleportation negotiation. Based on the findings, we designed two rear-awareness negotiation techniques, Window negotiation and MiniMap negotiation. These techniques display rear-space information within the forward view and allow direct interaction without excessive head movement. In a within-subjects study with 16 participants in a virtual museum, we compared these methods against a baseline front-facing approach. Results showed that MiniMap was the preferred technique, significantly improving spatial awareness, usability, and user comfort. Our findings emphasize the importance of integrating rear-space awareness in social VR negotiation systems to enhance interaction efficiency, comfort, and immersion.
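The geometry behind a rear-space minimap is straightforward even though the paper's exact design is only summarized above: project other users' positions, including those behind the viewer, onto a viewer-centered top-down map drawn in the forward view. A hypothetical sketch of that projection (not the authors' code):

```python
import numpy as np

def to_minimap(self_pos, self_yaw, other_pos, radius=5.0, map_size=256):
    """Project another user's ground-plane position (x, z) onto a top-down
    minimap centered on the viewer, so requests from behind stay visible.
    self_yaw is the viewer's heading in radians (0 = facing +z)."""
    dx, dz = other_pos[0] - self_pos[0], other_pos[1] - self_pos[1]
    # Rotate into the viewer's frame so "forward" points up on the map.
    c, s = np.cos(-self_yaw), np.sin(-self_yaw)
    fx, fz = c * dx - s * dz, s * dx + c * dz
    # Map positions within `radius` metres to pixels (centre = viewer).
    u = int(map_size / 2 + np.clip(fx / radius, -1, 1) * map_size / 2)
    v = int(map_size / 2 - np.clip(fz / radius, -1, 1) * map_size / 2)
    return u, v  # rows below the centre correspond to rear space

# A requester 2 m directly behind the viewer lands in the lower half.
print(to_minimap((0.0, 0.0), 0.0, (0.0, -2.0)))  # (128, 179)
```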
Citations: 0
Locomotion in CAVE: Enhancing immersion through full-body motion
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-06 · DOI: 10.1016/j.cag.2025.104510
Xiaohui Li, Xiaolong Liu, Zhongchen Shi, Wei Chen, Liang Xie, Meng Gai, Jun Cao, Suxia Zhang, Erwei Yin
Cave Automatic Virtual Environment (CAVE) is one of the immersive virtual reality (VR) devices currently used to present virtual environments. However, locomotion in the CAVE is limited by unnatural interaction methods, which severely hinder user experience and immersion. We propose a locomotion framework for CAVE environments aimed at enhancing the immersive locomotion experience through optimized human motion recognition. We first construct a four-sided display CAVE system and calibrate its camera with a dynamic Perspective-n-Point method; the resulting camera intrinsic and extrinsic parameters feed an action recognition architecture that determines the action category. The recognized action category is then passed to a graphical workstation that renders the corresponding display effects on the screens. We designed a user study to validate the effectiveness of our method. Compared to traditional methods, our method significantly improves realness and self-presence in the virtual environment and effectively reduces motion sickness.
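The Perspective-n-Point step itself is standard; a minimal sketch with OpenCV's solvePnP, using illustrative marker coordinates and intrinsics rather than the paper's actual setup:

```python
import numpy as np
import cv2

# 3D reference points on the CAVE walls (metres; illustrative values).
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0, 0, 1], [1, 0, 1]], dtype=np.float64)
# Their detected pixel locations in the tracking-camera image.
image_pts = np.array([[320, 240], [420, 238], [424, 140], [322, 138],
                      [300, 260], [440, 258]], dtype=np.float64)

# Intrinsics (fx, fy, cx, cy) from a prior intrinsic calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print(ok, R.shape, tvec.ravel())  # extrinsics: camera pose in the CAVE frame
```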
Citations: 0
HARDER: 3D human avatar reconstruction with distillation and explicit representation
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-05 · DOI: 10.1016/j.cag.2025.104512
Chun-Hau Yu, Yu-Hsiang Chen, Cheng-Yen Yu, Li-Chen Fu
3D human avatar reconstruction has become a popular research field in recent years. Although many studies have shown remarkable results, most existing methods either impose overly strict data requirements, such as depth information or multi-view images, or suffer from significant performance drops in specific areas. To address these challenges, we propose HARDER. We combine the Score Distillation Sampling (SDS) technique with two designed modules, Feature-Specific Image Captioning (FSIC) and Region-Aware Differentiable Rendering (RADR), allowing the Latent Diffusion Model (LDM) to guide the reconstruction process, especially in unseen regions. Furthermore, we have developed various training strategies, including personalized LDM, delayed SDS, focused SDS, and multi-pose SDS, to make the training process more efficient.
Our avatars use an explicit representation that is compatible with modern computer graphics pipelines. Also, the entire reconstruction and real-time animation process can be completed on a single consumer-grade GPU, making this application more accessible.
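Score Distillation Sampling follows the DreamFusion formulation: noise a differentiably rendered image, let the diffusion model predict that noise, and push the weighted residual back into the renderer's parameters without differentiating through the U-Net. A schematic PyTorch sketch under those assumptions; `render`, `unet`, `alphas_cumprod`, and `text_emb` are stand-ins for whichever LDM and avatar representation are used:

```python
import torch

def sds_step(render, unet, text_emb, alphas_cumprod, optimizer):
    """One Score Distillation Sampling step (DreamFusion-style).
    render() differentiably produces an image x from the avatar params;
    unet(x_t, t, text_emb) predicts the noise added at timestep t."""
    x = render()                                   # (B, C, H, W), requires grad
    t = torch.randint(20, 980, (1,), device=x.device)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * eps  # forward diffusion

    with torch.no_grad():                          # no gradient through the LDM
        eps_hat = unet(x_t, t, text_emb)

    grad = (1 - a_t) * (eps_hat - eps)             # weighted SDS gradient w.r.t. x
    optimizer.zero_grad()
    x.backward(gradient=grad)                      # inject grad, skip U-Net Jacobian
    optimizer.step()
```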
Citations: 0
NeRVis: Neural Radiance Field Model-Uncertainty Visualization
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1016/j.cag.2025.104511
Kirsten W.H. Maas, Thiam-Wai Chua, Danny Ruijters, Nicola Pezzotti, Anna Vilanova
Neural Radiance Field (NeRF) is a promising deep learning approach for three-dimensional (3D) scene reconstruction and view synthesis, with various applications in fields like robotics and medical imaging. However, similar to other deep learning models, understanding NeRF model inaccuracies and their causes is challenging. The 3D nature of NeRFs adds further challenges, such as identifying complex geometrical features and analyzing 2D views that suffer from object occlusions. Existing methods for uncertainty quantification (UQ) in NeRFs address the lack of NeRF model understanding by expressing uncertainty in model predictions, exposing limitations in model design or training data. However, these UQ techniques typically rely on quantitative evaluation that does not facilitate human interpretation. We introduce NeRVis, a visual analytics system that supports model users in exploring and analyzing uncertainty in NeRF scenes. NeRVis combines spatial uncertainty analysis with per-view uncertainty summaries, fostering analysis of the uncertainty in Lambertian NeRF scenes. As a proof of concept, we illustrate our approach using two UQ methods. We demonstrate the effectiveness of NeRVis in two different use scenarios, tackling key challenges in the NeRF UQ literature.
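The abstract does not name the two UQ methods used as proof of concept; one common option in the NeRF UQ literature is a deep ensemble, where per-pixel variance across independently trained models is the uncertainty signal a tool like NeRVis could visualize. A sketch under that assumption (the `model(rays)` interface is hypothetical):

```python
import torch

def ensemble_uncertainty(models, rays):
    """Epistemic uncertainty from a deep ensemble of NeRFs: render the
    same rays with every member and take the channel-mean variance.
    Assumed interface: model(rays) -> (N, 3) RGB predictions."""
    with torch.no_grad():
        preds = torch.stack([m(rays) for m in models])  # (M, N, 3)
    mean = preds.mean(dim=0)                            # ensemble estimate
    var = preds.var(dim=0).mean(dim=-1)                 # (N,) per-ray uncertainty
    return mean, var
```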
Citations: 0
Foreword to the special section on recent advances in graphics and interaction (RAGI 2025)
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1016/j.cag.2025.104509
Tomás Alves, José Creissac Campos, Alan Chalmers
{"title":"Foreword to the special section on recent advances in graphics and interaction (RAGI 2025)","authors":"Tomás Alves,&nbsp;José Creissac Campos,&nbsp;Alan Chalmers","doi":"10.1016/j.cag.2025.104509","DOIUrl":"10.1016/j.cag.2025.104509","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104509"},"PeriodicalIF":2.8,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145796673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive multiresolution exemplar-based texture synthesis on animated fluids
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-03 · DOI: 10.1016/j.cag.2025.104490
Julián E. Guzmán, David Mould, Eric Paquette
We propose an approach to synthesize textures for the animated free surfaces of fluids. Because fluids deform and experience topological changes, it is challenging to maintain fidelity to a reference texture exemplar while avoiding visual artifacts such as distortion and discontinuities. We introduce an adaptive multiresolution synthesis approach that balances fidelity to the exemplar and consistency with the fluid motion. Given a 2D exemplar texture, an orientation field from the first frame, an animated velocity field, and polygonal meshes corresponding to the animated liquid, our approach advects the texture and the orientation field across frames, yielding a coherent sequence of textures conforming to the per-frame geometry. Our adaptiveness relies on local 2D and 3D distortion measures, which guide multiresolution decisions to resynthesize or preserve the advected content. We prevent popping artifacts by enforcing gradual changes in color over time. Our approach works well both on slow-moving liquids and on turbulent ones with splashes. In addition, we demonstrate good performance on a variety of stationary texture exemplars.
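The advection step the method builds on is commonly implemented semi-Lagrangianly: trace each sample backward along the velocity field and resample the previous frame there. A 2D regular-grid sketch of that idea (the paper operates on animated surface meshes, so this is a deliberate simplification):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect(field, vel, dt):
    """Semi-Lagrangian advection of one scalar channel on a regular grid;
    apply per channel to advect texture coordinates or colors.
    field: (H, W); vel: (2, H, W) with vel[0] = vy, vel[1] = vx (cells/s)."""
    h, w = field.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Backtrace: where did the value now at (y, x) come from?
    src_y = ys - dt * vel[0]
    src_x = xs - dt * vel[1]
    # Bilinear resample of the previous frame at the backtraced positions.
    return map_coordinates(field, [src_y, src_x], order=1, mode='nearest')
```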
Citations: 0
Real-time haptic-based soft body suturing in virtual open surgery simulations
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-02 · DOI: 10.1016/j.cag.2025.104507
George Westergaard, Mark Ellis, Jacob Barker, Sofia Garces Palacios, Alexis Desir, Ganesh Sankaranarayanan, Suvranu De, Doga Demirel
In this work, we present a real-time virtual reality-based open surgery simulator that enables realistic soft-tissue suturing with bimanual haptic feedback. Our system uses eXtended Position-Based Dynamics (XPBD) for soft body and suture thread simulation, allowing stable real-time physics for complex interactions like continuous sutures and knot tying. In tests with all four common suturing techniques (purse-string, Connell, stay, and Lembert), the simulator maintained high frame rates (50–80 FPS) with up to 4155 simulated particles, demonstrating consistent real-time performance. As part of our work, we conducted a user study using our suturing simulator, in which 24 surgical trainees and experts used the Virtual Colorectal Surgery Trainer – Rectal Prolapse simulator. The study showed that 71% of participants (n=17) rated the anatomical realism as moderate to very high. Half (n=12) found the force feedback realistic, and 54% (n=13) found it useful, indicating effective immersion while also highlighting the need for improved haptic fidelity. Overall, the simulation provides a low-cost, high-fidelity training platform for open surgical suturing, addressing a critical gap in current virtual reality educational tools.
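XPBD, which the system uses for the soft body and the thread, updates each constraint through a compliance-weighted Lagrange multiplier: Δλ = (−C − α̃λ)/(w1 + w2 + α̃) with α̃ = α/Δt². A minimal sketch for the distance constraints a suture thread is typically built from (illustrative, not the simulator's code):

```python
import numpy as np

def xpbd_distance(p1, p2, w1, w2, rest, lam, compliance, dt):
    """One XPBD solve of a distance constraint C = |p1 - p2| - rest.
    w1, w2: inverse masses; lam: accumulated Lagrange multiplier."""
    d = p1 - p2
    length = np.linalg.norm(d)
    if length < 1e-9 or w1 + w2 == 0.0:
        return p1, p2, lam                # degenerate or fully pinned
    n = d / length                        # constraint gradient direction
    C = length - rest
    alpha_tilde = compliance / dt**2
    dlam = (-C - alpha_tilde * lam) / (w1 + w2 + alpha_tilde)
    p1 = p1 + w1 * dlam * n               # positional corrections
    p2 = p2 - w2 * dlam * n
    return p1, p2, lam + dlam
```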
Citations: 0
Editorial Note for Issue 133 of Computers & Graphics
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-01 · DOI: 10.1016/j.cag.2025.104508
{"title":"Editorial Note for Issue 133 of Computers & Graphics","authors":"","doi":"10.1016/j.cag.2025.104508","DOIUrl":"10.1016/j.cag.2025.104508","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104508"},"PeriodicalIF":2.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0