
Computers & Graphics-Uk: Latest Publications

Consistency-preserving Gaussian splatting for block-based large-scale scene reconstruction
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104493 | Pub Date: 2026-02-01 | Epub Date: 2025-11-24 | DOI: 10.1016/j.cag.2025.104493
Mengyi Wang, Beiqi Chen, Shengfang Pan, Niansheng Liu, Jinhe Su
Efficient and high-quality reconstruction of large-scale 3D scenes remains a key challenge for novel view synthesis. Recent advances in 3D Gaussian Splatting (3DGS) have achieved photorealistic rendering and real-time performance, but scaling 3DGS to city-scale environments typically relies on block-based training. This divide-and-conquer approach suffers from two major limitations: (1) the Gaussian properties of overlapping regions of adjacent blocks are inconsistent, resulting in noticeable visual artifacts after merging; (2) the sparse Gaussian distribution near block boundaries causes cracks or holes. To address these challenges, we propose a novel framework that regularizes the Gaussian properties of overlapping regions and enhances the Gaussian density near block edges, thus ensuring smooth transitions and seamless rendering. In addition, we introduce appearance decoupling to further adapt to viewpoint-dependent appearance variations in urban scenes and adopt a multi-scale densification strategy to balance details and efficiency at different scene scales. Experimental results show that in large-scale urban scenes with densely partitioned blocks, our method achieves consistently better reconstruction quality, with an average PSNR improvement of 0.25 dB over strong baselines on both aerial and street datasets.
Citations: 0
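The 0.25 dB gain reported above is measured in PSNR. As a reminder of how that metric is computed (a generic NumPy sketch, not the authors' evaluation code):

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images valued in [0, max_val]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Since PSNR is logarithmic in MSE, a 0.25 dB improvement corresponds to roughly a 5.6% reduction in mean squared error (10^(0.25/10) ≈ 1.059).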
CSDA-Vis: A (What-If-and-When) visual system for early dropout detection using counterfactual and survival analysis interactions
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104489 | Pub Date: 2026-02-01 | Epub Date: 2025-11-25 | DOI: 10.1016/j.cag.2025.104489
Germain Garcia-Zanabria, Daniel A. Gutierrez-Pachas, Jorge Poco, Erick Gomez-Nieto
Student dropout is a major concern for universities, leading them to invest heavily in strategies to lower attrition rates. Analytical tools are crucial for predicting dropout risks and informing policies on academic and social support. However, many of these tools depend solely on automated predictions, ignoring valuable insights from professors, mentors, and specialists. These experts can help identify the root causes of dropout and develop effective interventions. This paper introduces CSDA-Vis, a visualization system designed to analyze the influence of individual, institutional, and socioeconomic factors on student dropout rates. CSDA-Vis facilitates the identification of actionable strategies to mitigate dropout by integrating counterfactual and survival analysis methods. Unlike traditional approaches, our tool enables decision-makers to incorporate their expertise into the evaluation of different dropout scenarios. Developed in collaboration with domain experts, CSDA-Vis builds upon previous visualization tools and was validated through a case study using real datasets from a Latin American university. Additionally, we conducted an expert evaluation with professionals specializing in dropout analysis, further demonstrating the tool’s practical value and effectiveness.
Citations: 0
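The survival-analysis side of such a system rests on estimators like the Kaplan-Meier curve. A minimal sketch (generic, not the paper's implementation) of estimating a dropout "survival" curve from enrollment durations, where event=1 marks an observed dropout and event=0 a censored (still-enrolled or graduated) student:

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier estimate of S(t): probability of remaining enrolled past t.
    Returns a list of (event_time, survival_probability) pairs."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    curve, s = [], 1.0
    for t in np.unique(durations[events == 1]):
        at_risk = np.sum(durations >= t)              # still enrolled just before t
        d = np.sum((durations == t) & (events == 1))  # dropouts exactly at t
        s *= 1.0 - d / at_risk
        curve.append((float(t), s))
    return curve
```

Counterfactual interaction then amounts to re-estimating such curves under user-edited feature scenarios ("what if this student received a scholarship?").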
Situated visualization towards manufacturing maintenance training: Scoping review, design and user study
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104500 | Pub Date: 2026-02-01 | Epub Date: 2025-11-24 | DOI: 10.1016/j.cag.2025.104500
Zeinab BagheriFard, Miruna Maria Vasiliu, Emma Jane Pretty, Luis Quintero, Benjamin Edvinsson, Mario Romero, Renan Guarese
Immersive technologies offer advantages for the visualization of and interaction with complex setups within manufacturing maintenance processes. The present work catalogs different applications of AR/VR in manufacturing maintenance practices as an extended version of a workshop paper presented at the International Workshop on eXtended Reality for Industrial and Occupational Supports (XRIOS). Through a scoping review in three computing and engineering digital libraries, we outline the key attributes of immersive solutions (Np=115) for industrial maintenance, categorizing functional prototypes with ten parameters related to interaction, visualization, and research methods. Moreover, we conducted a workshop with three manufacturing experts discussing the future of maintenance interfaces. By bringing forth their recommendations and insights, we targeted a key training challenge in maintenance. We designed and implemented a situated visualization prototype for a safety-critical procedure with real-time, in-depth, spatially relevant instructions. We compared the effects of 2D labels and 3D ghosts in a VR-simulated AR environment. In a preliminary between-subjects evaluation study (Nu=24), we measured usability, workload, simulator sickness, completion time, and delayed recall. Although we did not find statistically significant differences between conditions, 3D ghosts showed slightly lower perceived workload and discomfort levels, along with shorter completion times. On the other hand, 2D labels produced higher usability. Overall, we contribute by mapping out the state of the art and discovering knowledge gaps within immersive maintenance, presenting a design and preliminary user study that adheres to our recommendations.
Citations: 0
Incorporating strafing gain into redirected walking with pose score guidance
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104492 | Pub Date: 2026-02-01 | Epub Date: 2025-11-19 | DOI: 10.1016/j.cag.2025.104492
Jin-Feng Li, Sen-Zhe Xu, Qiang Tong, Peng-Hui Yuan, Ling-Long Zou, Er-Xia Luo, Qi Wen Gan, Song-Hai Zhang
Redirected walking (RDW) is a virtual reality locomotion technique that enables users to explore large virtual environments within a limited physical space. While state-of-the-art methods based on physical trajectory planning make effective use of physical space, some of them often compromise user comfort due to frequent directional reversals in curvature gain. To address this, this paper proposes a novel RDW method that integrates strafing gain with pose score guidance. Our approach discretizes the physical space into a series of standard poses, each with a long-term safety score, and redirects the user toward the optimal pose. The main contribution is a path generation algorithm that decomposes redirection into two sequential stages to ensure stable gains for each planned path: it first uses the curvature gain to steer the user along an arc for orientation alignment, and then inserts a straight path segment with constant strafing gain to achieve positional alignment with the target pose. Simulation experiments demonstrate a reduction in resets, while the user study shows lower Simulator Sickness Questionnaire scores compared to previous methods. Our work explores the potential of combining novel gains with state-of-the-art methods to create a more effective and comfortable RDW controller algorithm.
Citations: 0
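The abstract's two gains can be illustrated with a toy per-step mapping from a virtual forward step to a physical pose: curvature gain bends the physical path into an arc, while strafing gain drifts the user laterally during straight walking. This is a hypothetical parameterisation for intuition only, not the paper's formulation (the function name and parameters are illustrative):

```python
import math

def apply_gains(pos, heading, step, curvature=0.0, strafe=0.0):
    """Map one virtual forward step of length `step` (metres) to a physical pose.
    curvature: injected turn in radians per metre walked (arc of radius 1/curvature);
    strafe: lateral physical offset injected per metre walked."""
    x, y = pos
    # stage 1: curvature gain rotates the physical heading as the user walks
    heading += curvature * step
    # stage 2: strafing gain adds a sideways component to the physical step
    dx = step * math.cos(heading) - strafe * step * math.sin(heading)
    dy = step * math.sin(heading) + strafe * step * math.cos(heading)
    return (x + dx, y + dy), heading
```

The paper's two-stage path generation corresponds to applying only the curvature term until orientation is aligned, then only the strafing term along a straight segment until position is aligned.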
Foreword to special section on SIBGRAPI 2025
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104526 | Pub Date: 2026-02-01 | Epub Date: 2025-12-17 | DOI: 10.1016/j.cag.2025.104526
Leonardo Sacht, Marcos Lage, Ricardo Marroquim
Citations: 0
Foreword to the special section on the 29th international ACM conference on 3D web technology
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104523 | Pub Date: 2026-02-01 | Epub Date: 2025-12-17 | DOI: 10.1016/j.cag.2025.104523
A. Augusto de Sousa, Miguel Angel Guevara López, Traian Lavric
Citations: 0
From pseudo- to non-correspondences: Robust point cloud registration via thickness-guided self-correction
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104528 | Pub Date: 2026-02-01 | Epub Date: 2026-01-02 | DOI: 10.1016/j.cag.2025.104528
Yifei Tian, Xiangyu Li, Jieming Yin
Most existing point cloud registration methods heavily rely on accurate correspondences between the source and target point clouds, such as point-level or superpoint-level matches. In dense and balanced point clouds where local geometric structures are relatively complete, correspondences are easier to establish, leading to satisfactory registration performance. However, real-world point clouds can be sparse or imbalanced. The absence or inconsistency of local geometric structures makes it challenging to construct reliable correspondences, significantly degrading the performance of mainstream registration methods. To address this challenge, we propose P2NCorr, a pseudo-to-non-correspondence registration method designed for robust alignment in point clouds with missing or low-quality correspondences. Our method leverages an attention-guided soft matching module that uses self- and cross-attention mechanisms to extract contextual features and constructs pseudo correspondences under slack constraints. On this basis, we introduce a geometric consistency metric based on the thickness-guided self-correction module, which enables fine-grained alignment and optimization of micro-surfaces in the fused point cloud. This thickness evaluation serves as a supplementary supervisory signal, forming a comprehensive feedback from the post-registration fusion to the feature extraction module, thereby improving both the accuracy and stability of the registration process. Experiments conducted on public datasets such as ModelNet40 and 7Scenes demonstrate that P2NCorr achieves high-precision registration even under challenging conditions, and show strong robustness and promising potential especially when point clouds are sparse, sampling is imbalanced, and measurements are noisy.
Citations: 0
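Once correspondences (pseudo or otherwise) are established, the closed-form rigid alignment sub-problem they feed into is standard. A Kabsch/SVD sketch of that step, for context (this is the textbook solution, not the P2NCorr pipeline itself):

```python
import numpy as np

def kabsch(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) aligning src to dst,
    given one-to-one correspondences as N x 3 arrays: dst ≈ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Learned methods like the one above differ mainly in how the correspondences (and their weights) are produced, not in this closed-form solve.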
Real-time haptic-based soft body suturing in virtual open surgery simulations
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104507 | Pub Date: 2026-02-01 | Epub Date: 2025-12-02 | DOI: 10.1016/j.cag.2025.104507
George Westergaard, Mark Ellis, Jacob Barker, Sofia Garces Palacios, Alexis Desir, Ganesh Sankaranarayanan, Suvranu De, Doga Demirel
In this work, we present a real-time virtual reality-based open surgery simulator that enables realistic soft-tissue suturing with bimanual haptic feedback. Our system uses eXtended Position-Based Dynamics (XPBD) for soft body and suture thread simulation, allowing stable real-time physics for complex interactions like continuous sutures and knot tying. In tests with all four common suturing techniques (purse-string, Connell, stay, and Lembert), the simulator maintained high frame rates (50–80 FPS) with up to 4155 simulated particles, demonstrating consistent real-time performance. As part of our work, we conducted a user study using our suturing simulator, where 24 surgical trainees and experts used the Virtual Colorectal Surgery Trainer – Rectal Prolapse simulator. The user study showed that 71% of participants (n=17) rated the anatomical realism as moderate to very high. Half (n=12) found the force feedback realistic, and 54% (n=13) of participants found the force feedback useful, indicating effective immersion while also highlighting the need for improved haptic fidelity. Overall, the simulation provides a low-cost, high-fidelity training platform for open surgical suturing, addressing a critical gap in current virtual reality educational tools.
Citations: 0
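XPBD extends position-based dynamics with a compliance term, which is what makes thread-like constraint chains stable at real-time rates. The building block for a suture thread, a single distance-constraint solve, can be sketched as follows (generic XPBD from the literature, not the simulator's own code):

```python
import math

def xpbd_distance(p1, p2, w1, w2, rest, lam, compliance, dt):
    """One XPBD iteration of a distance constraint between particles p1, p2
    ([x, y, z] lists) with inverse masses w1, w2. Returns the updated
    positions and the accumulated Lagrange multiplier."""
    d = [p2[i] - p1[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    n = [c / length for c in d]           # unit constraint direction
    C = length - rest                     # constraint violation
    alpha = compliance / (dt * dt)        # time-step-scaled compliance
    dlam = (-C - alpha * lam) / (w1 + w2 + alpha)
    p1 = [p1[i] - w1 * dlam * n[i] for i in range(3)]
    p2 = [p2[i] + w2 * dlam * n[i] for i in range(3)]
    return p1, p2, lam + dlam
```

With compliance = 0 this reduces to ordinary PBD; a small nonzero compliance makes the thread slightly stretchy in a time-step-independent way, which is why XPBD behaves consistently across frame rates.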
HARDER: 3D human avatar reconstruction with distillation and explicit representation
IF 2.8 | CAS Quartile 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Vol. 134, Article 104512 | Pub Date: 2026-02-01 | Epub Date: 2025-12-05 | DOI: 10.1016/j.cag.2025.104512
Chun-Hau Yu, Yu-Hsiang Chen, Cheng-Yen Yu, Li-Chen Fu
3D human avatar reconstruction has become a popular research field in recent years. Although many studies have shown remarkable results, most existing methods either impose overly strict data requirements, such as depth information or multi-view images, or suffer from significant performance drops in specific areas. To address these challenges, we propose HARDER. We combine the Score Distillation Sampling (SDS) technique with the designed modules, Feature-Specific Image Captioning (FSIC) and Region-Aware Differentiable Rendering (RADR), allowing the Latent Diffusion Model (LDM) to guide the reconstruction process, especially in unseen regions. Furthermore, we have developed various training strategies, including personalized LDM, delayed SDS, focused SDS, and multi-pose SDS, to make the training process more efficient.
Our avatars use an explicit representation that is compatible with modern computer graphics pipelines. Also, the entire reconstruction and real-time animation process can be completed on a single consumer-grade GPU, making this application more accessible.
Citations: 0
Editorial Note Issue 134: Advancing Graphics, Visualization, and Extended Reality
IF 2.8 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Computers & Graphics-UK, vol. 134, Article 104549 · Pub Date: 2026-02-01 · Epub Date: 2026-02-17 · DOI: 10.1016/j.cag.2026.104549
Joaquim Jorge (Editor-in-Chief)
Citations: 0