
Computers & Graphics-UK: Latest Publications

Foreword to the Special Section on Smart Tools and Applications in Graphics (STAG 2024)
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-23 | DOI: 10.1016/j.cag.2026.104533
Andrea Giachetti, Umberto Castellani, Ariel Caputo, Valeria Garro, Nicola Capece
This Special Section contains extended and revised versions of selected papers presented at the 11th Conference on Smart Tools and Applications in Graphics (STAG 2024), held in Verona (Italy) on November 14–15, 2024. Three papers were selected by appointed members of the Program Committee; their extended versions were subsequently submitted and further reviewed by experts. The resulting collection comprises contributions spanning a broad range of topics, including navigation in mixed reality, reinforcement learning for intelligent agents in 3D environments, and interactive image relighting using neural networks.
Citations: 0
Enhanced Force-Scheme: A fast and accurate global dimensionality reduction method
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-22 | DOI: 10.1016/j.cag.2026.104536
Jaume Ros, Alessio Arleo, Fernando Paulovich
Global nonlinear Dimensionality Reduction (DR) methods excel at capturing complex features of datasets while preserving their overall high-dimensional structure when projecting them into a lower-dimensional space. Force-Scheme (FS) is one such method, used in a variety of domains. However, its use is still hindered by distortions and high computational cost. In this paper, we introduce Enhanced Force-Scheme (EFS), a revisited approach to solve the optimization problem posed by FS. We build on the core ideas of the original FS algorithm and introduce a more advanced optimization framework grounded in gradient-based optimization, which yields higher-quality layouts. Additionally, we elaborate on multiple strategies to accelerate the computation of projections using EFS, thereby facilitating its use on large datasets. Finally, we compare it with FS and other popular DR techniques and show that, among the methods tested, EFS best captures global structure while still performing well on local metrics.
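To make the Force-Scheme idea above concrete, here is a minimal sketch that projects data to 2D by plain gradient descent on the pairwise-distance stress. It is illustrative only: the function name, learning rate, and iteration count are made up, and it omits the advanced optimizer and acceleration strategies that EFS contributes.

```python
import numpy as np

def force_scheme_gd(X, n_iter=300, lr=0.05, seed=0):
    """Project X (n x d) to 2D by gradient descent on sum_ij (d_low - d_high)^2."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # high-dim distances
    Y = rng.normal(scale=1e-2, size=(n, 2))                     # initial 2D layout
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]                    # (n, n, 2)
        d_low = np.linalg.norm(diff, axis=-1) + 1e-9
        coeff = 2.0 * (d_low - D) / d_low                       # d(stress)/d(distance)
        np.fill_diagonal(coeff, 0.0)
        grad = (coeff[:, :, None] * diff).sum(axis=1)           # gradient w.r.t. each Y_i
        Y -= lr * grad / n
    return Y

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(200, 10))
    print(force_scheme_gd(X).shape)  # (200, 2)
```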
Citations: 0
Guided spiral visualization for periodic time series and residual analysis
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-20 | DOI: 10.1016/j.cag.2026.104535
Julian Rakuschek, Helwig Hauser, Tobias Schreck
Time series in domains such as climate, traffic, and energy often contain multiple, overlapping periodic patterns. Spiral visualizations can support the exploration of such data, but their effectiveness is limited in practice. Outliers and global trends skew the color mapping, dominant periodic components can hide weaker patterns, selecting a meaningful period length is challenging, and comparing subsequences within large datasets remains cumbersome. To address these challenges, we present a guided analytical workflow centered on an enhanced time series spiral visualization. A regression model tailored to periodic data helps identify suitable period lengths and exposes secondary patterns through its residuals. Visual guidance mitigates issues caused by skewed color mappings and highlights relevant spiral sectors even when global trends or outliers are present. Users can interactively select and compare sectors based on measures of average, trend, and similarity, and examine them in linked views or a provenance dashboard, which maintains a record of all user interactions and allows comparing multiple spirals with each other. Application examples demonstrate use cases where the visual sector selection guidance together with the exploration of model residuals leads to insights. In traffic data, for instance, removing the dominant day–night rhythm reveals rush-hour effects that become visible through exploration of the residuals.
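A minimal sketch of the workflow's core ingredients, under simplifying assumptions: a fixed-period sinusoidal least-squares fit stands in for the paper's periodic regression model, and the residuals are laid out on a polar spiral (angle is the position within the period, one ring per cycle). The guidance measures, sector selection, and provenance dashboard are not modeled.

```python
import numpy as np
import matplotlib.pyplot as plt

def fit_periodic(t, y, period):
    """Least-squares fit of y ~ a*sin(wt) + b*cos(wt) + c for a fixed period."""
    w = 2 * np.pi / period
    A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Synthetic hourly series: a daily rhythm plus a weekly one plus noise.
t = np.arange(24 * 28, dtype=float)
rng = np.random.default_rng(0)
y = (10 * np.sin(2 * np.pi * t / 24)
     + 3 * np.sin(2 * np.pi * t / (24 * 7))
     + rng.normal(0, 1, t.size))

period = 24.0
residual = y - fit_periodic(t, y, period)      # daily component removed

theta = 2 * np.pi * (t % period) / period      # angular position within a cycle
radius = 1.0 + t / period                      # one ring per cycle
fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.scatter(theta, radius, c=residual, cmap="coolwarm", s=6)
ax.set_title("Residuals after removing the 24 h period")
plt.show()
```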
Citations: 0
Consistent orientation normal vector estimation for scattered point cloud
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-19 | DOI: 10.1016/j.cag.2026.104534
Hui Wang, Ming Li, QingYue Wei
Accurate normal vector estimation for scattered point clouds is a fundamental and challenging task in three-dimensional reconstruction. We introduce a novel framework that integrates curvature-aware spherical fitting with robust kernel regression to estimate reliable and consistently oriented normal vectors. Our approach explicitly models local geometry using spherical surfaces, enabling precise capture of geometric details in high-variability regions, including sharp features and high-curvature areas. The kernel regression mechanism adaptively weights neighboring points based on spatial proximity and geometric consistency, effectively suppressing the effects of noise, outliers, and non-uniform sampling. We further propose a variational model that combines local geometric constraints with global propagation to ensure orientation consistency across the entire point cloud data. Extensive experiments demonstrated that our method can effectively handle challenging conditions, including noise, outliers, surfaces in close proximity, non-uniform sampling, and sharp features, achieving superior accuracy and robustness compared with existing approaches.
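The sketch below illustrates only the sphere-fitting ingredient: an algebraic least-squares sphere fit to a point's k nearest neighbors, with the normal taken as the direction from the fitted center to the point. The robust kernel-regression weighting and the variational orientation-consistency model described in the abstract are omitted, and the function name and neighborhood size are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def sphere_fit_normal(points, idx, k=50):
    """Unoriented normal at points[idx] from a least-squares sphere fit to k neighbors."""
    tree = cKDTree(points)
    _, nn = tree.query(points[idx], k=k)
    P = points[nn]
    # Algebraic sphere fit: |p|^2 = 2 c.p + d, with d = r^2 - |c|^2
    A = np.hstack([2.0 * P, np.ones((k, 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    n = points[idx] - center                       # from fitted center to the point
    return n / np.linalg.norm(n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(2000, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # samples of a unit sphere
    pts += rng.normal(scale=0.01, size=pts.shape)       # mild noise
    n = sphere_fit_normal(pts, idx=0)
    # The true normal at a unit-sphere sample is the point itself (up to sign).
    print(np.dot(n, pts[0] / np.linalg.norm(pts[0])))   # close to +/- 1
```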
Citations: 0
Cell-constrained particles for incompressible fluids
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-17 | DOI: 10.1016/j.cag.2026.104532
Zohar Levi
Incompressibility is a fundamental condition in most fluid models. Accumulation of simulation errors violates it and causes fluid volume loss. Prior work has proposed correction methods to combat this drift, but they remain approximate and can fail in extreme scenarios. We present a particle-in-cell method that strictly enforces a grid-based definition of discrete incompressibility at every time step.
We formulate a linear programming (LP) problem that bounds the number of particles that end up in each grid cell. To scale this to large 3D domains, we introduce a narrow-band variant with specialized band-interface constraints to ensure volume preservation. Further acceleration is achieved by simplifying the problem and adding a band-specific correction step that is formulated as a minimum-cost flow problem (MCFP).
We also address coupling with moving solids by incorporating obstacle-aware penalties directly into our optimization. In extreme test scenes, we demonstrate strict volume preservation and robust behavior where state-of-the-art methods exhibit noticeable volume drift or artifacts.
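As a toy illustration of the kind of constraint described above, the following sketch assigns advected particles to candidate cells with a transportation-style LP that caps the number of particles per cell, solved with scipy.optimize.linprog. This is not the paper's formulation: the narrow-band variant and the minimum-cost-flow correction are not modeled, and the cost matrix and capacity are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def redistribute(cost, cap):
    """cost[i, j]: displacement cost of putting particle i into cell j; cap: per-cell bound."""
    n, m = cost.shape
    c = cost.ravel()                         # variable x[i, j] lives at index i*m + j
    A_eq = np.zeros((n, n * m))              # each particle is assigned exactly once
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    b_eq = np.ones(n)
    A_ub = np.zeros((m, n * m))              # each cell receives at most `cap` particles
    for j in range(m):
        A_ub[j, j::m] = 1.0
    b_ub = np.full(m, float(cap))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, 1), method="highs")
    return res.x.reshape(n, m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cost = rng.random((12, 4))               # 12 particles, 4 candidate cells
    x = redistribute(cost, cap=3)
    print(x.sum(axis=0).round(2))            # per-cell load, each <= 3
```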
Citations: 0
Foreword to the Computer Graphics & Visual Computing conference 2024 special section
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-13 | DOI: 10.1016/j.cag.2026.104530
Aidan Slingsby, Mai Elshehaly, Kai Xu
{"title":"Foreword to the Computer Graphics & Visual Computing conference 2024 special section","authors":"Aidan Slingsby,&nbsp;Mai Elshehaly,&nbsp;Kai Xu","doi":"10.1016/j.cag.2026.104530","DOIUrl":"10.1016/j.cag.2026.104530","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104530"},"PeriodicalIF":2.8,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Preface to the Special Section: ACM MIG 2024
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-08 | DOI: 10.1016/j.cag.2026.104531
Soraia Raupp Musse, Sheldon Andrews
{"title":"Preface to the Special Section: ACM MIG 2024","authors":"Soraia Raupp Musse,&nbsp;Sheldon Andrews","doi":"10.1016/j.cag.2026.104531","DOIUrl":"10.1016/j.cag.2026.104531","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104531"},"PeriodicalIF":2.8,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient semantic-aware texture optimization for 3D scene reconstruction
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-05 | DOI: 10.1016/j.cag.2025.104529
Xiaoqun Wu, Tian Yang, Liu Yu, Jian Cao, Huiling Si
To address the issue of blurry artifacts in texture mapping for 3D reconstruction, we propose an innovative approach that optimizes textures based on semantic-aware similarity. Unlike previous algorithms that require significant computational costs, our method introduces a novel metric that provides a more efficient solution for texture mapping. This allows for high-quality texture mapping in 3D reconstructions using multi-view captured images. Our approach begins by establishing mapping within the image sequence using the available 3D information. We then quantitatively assess pixel similarity using our proposed semantic-aware metric, which guides the texture image generation process. By leveraging semantic-aware similarity, we constrain texture mapping and enhance texture clarity. Finally, the texture image is projected onto the geometry to produce a 3D textured mesh. Experimental results conclusively demonstrate that our method can generate 3D meshes with crisp, high-fidelity textures faster than existing methods, even in scenarios involving substantial camera pose errors and low-precision reconstruction geometry.
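Because the paper's semantic-aware metric is not reproduced here, the sketch below uses a generic stand-in: each candidate view contributes a color and a feature vector for a texel, and views whose features agree with the per-texel consensus receive higher blending weight. The function names, the sharpness parameter, and the cosine-to-consensus similarity are all illustrative assumptions.

```python
import numpy as np

def blend_texel(colors, feats, sharpness=8.0):
    """colors: (V, 3) candidate RGB per view; feats: (V, F) per-view feature vectors."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-9)
    consensus = f.mean(axis=0)
    consensus /= np.linalg.norm(consensus) + 1e-9
    sim = f @ consensus                      # cosine similarity to the consensus feature
    w = np.exp(sharpness * sim)              # sharpen so dissimilar views are downweighted
    w /= w.sum()
    return w @ colors                        # similarity-weighted blend

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    colors = rng.random((5, 3))              # five views observing the same texel
    feats = rng.random((5, 16))              # hypothetical per-view semantic features
    print(blend_texel(colors, feats))
```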
Citations: 0
From pseudo- to non-correspondences: Robust point cloud registration via thickness-guided self-correction
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-01-02 | DOI: 10.1016/j.cag.2025.104528
Yifei Tian, Xiangyu Li, Jieming Yin
Most existing point cloud registration methods heavily rely on accurate correspondences between the source and target point clouds, such as point-level or superpoint-level matches. In dense and balanced point clouds where local geometric structures are relatively complete, correspondences are easier to establish, leading to satisfactory registration performance. However, real-world point clouds can be sparse or imbalanced. The absence or inconsistency of local geometric structures makes it challenging to construct reliable correspondences, significantly degrading the performance of mainstream registration methods. To address this challenge, we propose P2NCorr, a pseudo-to-non-correspondence registration method designed for robust alignment in point clouds with missing or low-quality correspondences. Our method leverages an attention-guided soft matching module that uses self- and cross-attention mechanisms to extract contextual features and constructs pseudo correspondences under slack constraints. On this basis, we introduce a geometric consistency metric based on the thickness-guided self-correction module, which enables fine-grained alignment and optimization of micro-surfaces in the fused point cloud. This thickness evaluation serves as a supplementary supervisory signal, forming a comprehensive feedback from the post-registration fusion to the feature extraction module, thereby improving both the accuracy and stability of the registration process. Experiments conducted on public datasets such as ModelNet40 and 7Scenes demonstrate that P2NCorr achieves high-precision registration even under challenging conditions. Especially when point clouds are sparse, sampling is imbalanced, and measurements are noisy, experiments demonstrate strong robustness and promising potential.
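A generic sketch of the soft-correspondence-then-rigid-fit pattern that the abstract builds on: a temperature-controlled softmax over descriptor distances stands in for the attention-guided soft matching, and a weighted Kabsch/Procrustes step recovers the rigid transform. The slack constraints and the thickness-guided self-correction loop are not modeled; the descriptors, temperature, and confidence weights are illustrative.

```python
import numpy as np

def soft_register(src, tgt, src_feat, tgt_feat, temperature=0.01):
    """Estimate (R, t) with R @ src_i + t ~ tgt from soft descriptor matches."""
    d2 = ((src_feat[:, None, :] - tgt_feat[None, :, :]) ** 2).sum(axis=-1)
    d2 -= d2.min(axis=1, keepdims=True)            # stabilize the softmax
    P = np.exp(-d2 / temperature)
    P /= P.sum(axis=1, keepdims=True)              # soft correspondence weights
    matched = P @ tgt                              # soft target for each source point
    w = P.max(axis=1)
    w /= w.sum()                                   # per-point confidence weights
    mu_s, mu_t = w @ src, w @ matched
    H = (src - mu_s).T @ (w[:, None] * (matched - mu_t))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # closest proper rotation
    t = mu_t - R @ mu_s
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(size=(100, 3))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    tgt = src @ R_true.T + np.array([0.5, -0.2, 0.1])
    # Ideal descriptors: target points reuse the pre-transform source coordinates.
    R, t = soft_register(src, tgt, src_feat=src, tgt_feat=src)
    print("rotation error:", np.linalg.norm(R - R_true))   # close to zero
```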
Citations: 0
Energy-based haptic rendering for real-time surgical simulation
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-23 | DOI: 10.1016/j.cag.2025.104524
Lei He, Mingbo Hu, Wenli Xiu, Hongyu Wu, Siming Zheng, Shuai Li, Qian Dong, Aimin Hao
Haptic-based surgical simulation is widely utilized for training surgical skills. However, simulating the interaction between rigid surgical instruments and soft tissues presents significant technical challenges. In this paper, we propose an energy-based haptic rendering method to achieve both large deformations and rigid–soft haptic interaction. Different from existing methods, both the rigid tools and soft tissues are modeled by an energy-based virtual coupling system. The constraints of soft deformation, tool-object interaction and haptic rendering are defined by potential energy. Benefiting from energy-based constraints, we can realize complex surgical operations, such as inserting tools into soft tissue. The virtual coupling of soft tissue enables the separation of haptic interaction into two components: soft deformation with high computational complexity, and high-frequency haptic rendering. The soft deformation with shape constraints is accelerated on the GPU at a relatively low frequency (60Hz ∼ 100Hz), while the haptic rendering runs in another thread at a high frequency (≥ 1000Hz). We have implemented haptic simulation for two commonly used surgical operations, pressing and pulling. The experimental results show that our method can achieve stable feedback force and non-penetration between the tool and soft tissue under the condition of large soft deformation.
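A minimal sketch of the virtual-coupling split described above: a fast loop (about 1 kHz in the paper) computes a spring-damper force between the device pose and the latest proxy-tool pose delivered by the slower deformation solver (about 60 to 100 Hz). The gains, the dummy trajectory, and the frozen proxy pose are illustrative assumptions, not the paper's system.

```python
import numpy as np

K_C, B_C = 400.0, 2.0            # illustrative coupling stiffness and damping

def coupling_force(device_pos, device_vel, proxy_pos):
    """Force fed back to the device; the opposite force would drive the simulated tool."""
    return K_C * (proxy_pos - device_pos) - B_C * device_vel

def haptic_loop(device_trajectory, proxy_pos, dt=1e-3):
    """High-rate loop; proxy_pos is updated asynchronously by the slower deformation solver."""
    prev = device_trajectory[0]
    forces = []
    for pos in device_trajectory:
        vel = (pos - prev) / dt
        forces.append(coupling_force(pos, vel, proxy_pos))
        prev = pos
    return np.array(forces)

if __name__ == "__main__":
    t = np.linspace(0.0, 0.05, 50)                       # 50 ms of device motion at 1 kHz
    device = np.stack([0.01 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
    proxy = np.zeros(3)                                  # tool held back at the tissue surface
    print(haptic_loop(device, proxy)[-1])                # resistive force builds up along x
```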
Citations: 0