Foreword to the Special Section on Smart Tools and Applications in Graphics (STAG 2024)
Pub Date: 2026-01-23 · DOI: 10.1016/j.cag.2026.104533
Andrea Giachetti, Umberto Castellani, Ariel Caputo, Valeria Garro, Nicola Capece
This Special Section contains extended and revised versions of selected papers presented at the 11th Conference on Smart Tools and Applications in Graphics (STAG 2024), held in Verona (Italy) on November 14–15, 2024. Three papers were selected by appointed members of the Program Committee; their extended versions were subsequently submitted and further reviewed by experts. The resulting collection comprises contributions spanning a broad range of topics, including navigation in mixed reality, reinforcement learning for intelligent agents in 3D environments, and interactive image relighting using neural networks.
{"title":"Foreword to the Special Section on Smart Tools and Applications in Graphics (STAG 2024)","authors":"Andrea Giachetti, Umberto Castellani, Ariel Caputo, Valeria Garro, Nicola Capece","doi":"10.1016/j.cag.2026.104533","DOIUrl":"10.1016/j.cag.2026.104533","url":null,"abstract":"<div><div>This Special Section contains extended and revised versions of selected papers presented at the 11th Conference on Smart Tools and Applications in Graphics (STAG 2024), held in Verona (Italy) on November 14–15, 2024. Three papers were selected by appointed members of the Program Committee; their extended versions were subsequently submitted and further reviewed by experts. The resulting collection comprises contributions spanning a broad range of topics, including navigation in mixed reality, reinforcement learning for intelligent agents in 3D environments, and interactive image relighting using neural networks.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"135 ","pages":"Article 104533"},"PeriodicalIF":2.8,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146080812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced Force-Scheme: A fast and accurate global dimensionality reduction method
Pub Date: 2026-01-22 · DOI: 10.1016/j.cag.2026.104536
Jaume Ros, Alessio Arleo, Fernando Paulovich
Global nonlinear Dimensionality Reduction (DR) methods excel at capturing complex features of datasets while preserving their overall high-dimensional structure when projecting them into a lower-dimensional space. Force-Scheme (FS) is one such method, used in a variety of domains. However, its use is still hindered by distortions and high computational cost. In this paper, we introduce Enhanced Force-Scheme (EFS), a revisited approach to solving the optimization problem posed by FS. We build on the core ideas of the original FS algorithm and introduce a more advanced optimization framework grounded in gradient-based optimization, which yields higher-quality layouts. Additionally, we describe multiple strategies to accelerate the computation of projections using EFS, thereby facilitating its use on large datasets. Finally, we compare it with FS and other popular DR techniques and show that, among the methods tested, EFS best captures global structure while still performing well on local metrics.
{"title":"Enhanced Force-Scheme: A fast and accurate global dimensionality reduction method","authors":"Jaume Ros, Alessio Arleo, Fernando Paulovich","doi":"10.1016/j.cag.2026.104536","DOIUrl":"10.1016/j.cag.2026.104536","url":null,"abstract":"<div><div>Global nonlinear Dimensionality Reduction (DR) methods excel at capturing complex features of datasets while preserving their overall high-dimensional structure when projecting them into a lower-dimensional space. Force-Scheme (FS) is one such method, used in a variety of domains. However, its use is still hindered by distortions and high computational cost. In this paper, we introduce <em>Enhanced Force-Scheme</em> (EFS), a revisited approach to solve the optimization problem posed by FS. We build on the core ideas of the original FS algorithm and introduce a more advanced optimization framework grounded in gradient-based optimization, which yields higher-quality layouts. Additionally, we elaborate on multiple strategies to accelerate the computation of projections using EFS, thereby facilitating its use on large datasets. Finally, we compare it with FS and other popular DR techniques and show that, among the methods tested, EFS best captures global structure while still performing well on local metrics.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"135 ","pages":"Article 104536"},"PeriodicalIF":2.8,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guided spiral visualization for periodic time series and residual analysis
Pub Date: 2026-01-20 · DOI: 10.1016/j.cag.2026.104535
Julian Rakuschek, Helwig Hauser, Tobias Schreck
Time series in domains such as climate, traffic, and energy often contain multiple, overlapping periodic patterns. Spiral visualizations can support the exploration of such data, but their effectiveness is limited in practice. Outliers and global trends skew the color mapping, dominant periodic components can hide weaker patterns, selecting a meaningful period length is challenging, and comparing subsequences within large datasets remains cumbersome. To address these challenges, we present a guided analytical workflow centered on an enhanced time series spiral visualization. A regression model tailored to periodic data helps identify suitable period lengths and exposes secondary patterns through its residuals. Visual guidance mitigates issues caused by skewed color mappings and highlights relevant spiral sectors even when global trends or outliers are present. Users can interactively select and compare sectors based on measures of average, trend, and similarity, and examine them in linked views or a provenance dashboard, which maintains a record of all user interactions and allows comparing multiple spirals with each other. Application examples demonstrate use cases where the visual sector selection guidance together with the exploration of model residuals leads to insights. In traffic data, for instance, removing the dominant day–night rhythm reveals rush-hour effects that become visible through exploration of the residuals.
{"title":"Guided spiral visualization for periodic time series and residual analysis","authors":"Julian Rakuschek , Helwig Hauser , Tobias Schreck","doi":"10.1016/j.cag.2026.104535","DOIUrl":"10.1016/j.cag.2026.104535","url":null,"abstract":"<div><div>Time series in domains such as climate, traffic, and energy often contain multiple, overlapping periodic patterns. Spiral visualizations can support the exploration of such data, but their effectiveness is limited in practice. Outliers and global trends skew the color mapping, dominant periodic components can hide weaker patterns, selecting a meaningful period length is challenging, and comparing subsequences within large datasets remains cumbersome. To address these challenges, we present a guided analytical workflow centered on an enhanced time series spiral visualization. A regression model tailored to periodic data helps identify suitable period lengths and exposes secondary patterns through its residuals. Visual guidance mitigates issues caused by skewed color mappings and highlights relevant spiral sectors even when global trends or outliers are present. Users can interactively select and compare sectors based on measures of average, trend, and similarity, and examine them in linked views or a provenance dashboard, which maintains a record of all user interactions and allows comparing multiple spirals with each other. Application examples demonstrate use cases where the visual sector selection guidance together with the exploration of model residuals leads to insights. In traffic data, for instance, removing the dominant day–night rhythm reveals rush-hour effects that become visible through exploration of the residuals.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"135 ","pages":"Article 104535"},"PeriodicalIF":2.8,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consistent orientation normal vector estimation for scattered point cloud
Pub Date: 2026-01-19 · DOI: 10.1016/j.cag.2026.104534
Hui Wang, Ming Li, QingYue Wei
Accurate normal vector estimation for scattered point clouds is a fundamental and challenging task in three-dimensional reconstruction. We introduce a novel framework that integrates curvature-aware spherical fitting with robust kernel regression to estimate reliable and consistently oriented normal vectors. Our approach explicitly models local geometry using spherical surfaces, enabling precise capture of geometric details in high-variability regions, including sharp features and high-curvature areas. The kernel regression mechanism adaptively weights neighboring points based on spatial proximity and geometric consistency, effectively suppressing the effects of noise, outliers, and non-uniform sampling. We further propose a variational model that combines local geometric constraints with global propagation to ensure orientation consistency across the entire point cloud. Extensive experiments demonstrate that our method effectively handles challenging conditions, including noise, outliers, surfaces in close proximity, non-uniform sampling, and sharp features, achieving superior accuracy and robustness compared with existing approaches.
{"title":"Consistent orientation normal vector estimation for scattered point cloud","authors":"Hui Wang, Ming Li, QingYue Wei","doi":"10.1016/j.cag.2026.104534","DOIUrl":"10.1016/j.cag.2026.104534","url":null,"abstract":"<div><div>Accurate normal vector estimation for scattered point clouds is a fundamental and challenging task in three-dimensional reconstruction. We introduce a novel framework that integrates curvature-aware spherical fitting with robust kernel regression to estimate reliable and consistently oriented normal vectors. Our approach explicitly models local geometry using spherical surfaces, enabling precise capture of geometric details in high-variability regions, including sharp features and high-curvature areas. The kernel regression mechanism adaptively weights neighboring points based on spatial proximity and geometric consistency, effectively suppressing the effects of noise, outliers, and non-uniform sampling. We further propose a variational model that combines local geometric constraints with global propagation to ensure orientation consistency across the entire point cloud data. Extensive experiments demonstrated that our method can effectively handle challenging conditions, including noise, outliers, surfaces in close proximity, non-uniform sampling, and sharp features, achieving superior accuracy and robustness compared with existing approaches.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104534"},"PeriodicalIF":2.8,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cell-constrained particles for incompressible fluids
Pub Date: 2026-01-17 · DOI: 10.1016/j.cag.2026.104532
Zohar Levi
Incompressibility is a fundamental condition in most fluid models. Accumulation of simulation errors violates it and causes fluid volume loss. Prior work has proposed correction methods to combat this drift, but they remain approximate and can fail in extreme scenarios. We present a particle-in-cell method that strictly enforces a grid-based definition of discrete incompressibility at every time step.
We formulate a linear programming (LP) problem that bounds the number of particles that end up in each grid cell. To scale this to large 3D domains, we introduce a narrow-band variant with specialized band-interface constraints to ensure volume preservation. Further acceleration is achieved by simplifying the problem and adding a band-specific correction step that is formulated as a minimum-cost flow problem (MCFP).
We also address coupling with moving solids by incorporating obstacle-aware penalties directly into our optimization. In extreme test scenes, we demonstrate strict volume preservation and robust behavior where state-of-the-art methods exhibit noticeable volume drift or artifacts.
{"title":"Cell-constrained particles for incompressible fluids","authors":"Zohar Levi","doi":"10.1016/j.cag.2026.104532","DOIUrl":"10.1016/j.cag.2026.104532","url":null,"abstract":"<div><div>Incompressibility is a fundamental condition in most fluid models. Accumulation of simulation errors violates it and causes fluid volume loss. Prior work has proposed correction methods to combat this drift, but they remain approximate and can fail in extreme scenarios. We present a particle-in-cell method that <em>strictly</em> enforces a grid-based definition of discrete incompressibility at every time step.</div><div>We formulate a linear programming (LP) problem that bounds the number of particles that end up in each grid cell. To scale this to large 3D domains, we introduce a narrow-band variant with specialized band-interface constraints to ensure volume preservation. Further acceleration is achieved by simplifying the problem and adding a band-specific correction step that is formulated as a minimum-cost flow problem (MCFP).</div><div>We also address coupling with moving solids by incorporating obstacle-aware penalties directly into our optimization. In extreme test scenes, we demonstrate strict volume preservation and robust behavior where state-of-the-art methods exhibit noticeable volume drift or artifacts.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104532"},"PeriodicalIF":2.8,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient semantic-aware texture optimization for 3D scene reconstruction
Pub Date: 2026-01-05 · DOI: 10.1016/j.cag.2025.104529
Xiaoqun Wu, Tian Yang, Liu Yu, Jian Cao, Huiling Si
To address blurry artifacts in texture mapping for 3D reconstruction, we propose an approach that optimizes textures based on semantic-aware similarity. Unlike previous algorithms that incur significant computational costs, our method introduces a novel metric that provides a more efficient solution for texture mapping, allowing high-quality texture mapping in 3D reconstructions from multi-view captured images. Our approach begins by establishing a mapping within the image sequence using the available 3D information. We then quantitatively assess pixel similarity using our proposed semantic-aware metric, which guides the texture image generation process. By leveraging semantic-aware similarity, we constrain texture mapping and enhance texture clarity. Finally, the texture image is projected onto the geometry to produce a 3D textured mesh. Experimental results demonstrate that our method generates 3D meshes with crisp, high-fidelity textures faster than existing methods, even in scenarios involving substantial camera pose errors and low-precision reconstruction geometry.
{"title":"Efficient semantic-aware texture optimization for 3D scene reconstruction","authors":"Xiaoqun Wu, Tian Yang, Liu Yu, Jian Cao, Huiling Si","doi":"10.1016/j.cag.2025.104529","DOIUrl":"10.1016/j.cag.2025.104529","url":null,"abstract":"<div><div>To address the issue of blurry artifacts in texture mapping for 3D reconstruction, we propose an innovative approach that optimizes textures based on semantic-aware similarity. Unlike previous algorithms that require significant computational costs, our method introduces a novel metric that provides a more efficient solution for texture mapping. This allows for high-quality texture mapping in 3D reconstructions using multi-view captured images. Our approach begins by establishing mapping within the image sequence using the available 3D information. We then quantitatively assess pixel similarity using our proposed semantic-aware metric, which guides the texture image generation process. By leveraging semantic-aware similarity, we constrain texture mapping and enhance texture clarity. Finally, the texture image is projected onto the geometry to produce a 3D textured mesh. Experimental results conclusively demonstrate that our method can generate 3D meshes with crisp, high-fidelity textures faster than existing methods, even in scenarios involving substantial camera pose errors and low-precision reconstruction geometry.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104529"},"PeriodicalIF":2.8,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From pseudo- to non-correspondences: Robust point cloud registration via thickness-guided self-correction
Pub Date: 2026-01-02 · DOI: 10.1016/j.cag.2025.104528
Yifei Tian, Xiangyu Li, Jieming Yin
Most existing point cloud registration methods rely heavily on accurate correspondences between the source and target point clouds, such as point-level or superpoint-level matches. In dense and balanced point clouds, where local geometric structures are relatively complete, correspondences are easy to establish, leading to satisfactory registration performance. However, real-world point clouds can be sparse or imbalanced. The absence or inconsistency of local geometric structures makes it challenging to construct reliable correspondences, significantly degrading the performance of mainstream registration methods. To address this challenge, we propose P2NCorr, a pseudo-to-non-correspondence registration method designed for robust alignment of point clouds with missing or low-quality correspondences. Our method leverages an attention-guided soft matching module that uses self- and cross-attention mechanisms to extract contextual features and construct pseudo correspondences under slack constraints. On this basis, we introduce a geometric consistency metric based on a thickness-guided self-correction module, which enables fine-grained alignment and optimization of micro-surfaces in the fused point cloud. This thickness evaluation serves as a supplementary supervisory signal, forming comprehensive feedback from the post-registration fusion to the feature extraction module and thereby improving both the accuracy and stability of the registration process. Experiments on public datasets such as ModelNet40 and 7Scenes demonstrate that P2NCorr achieves high-precision registration even under challenging conditions, and that it remains robust in particular when point clouds are sparse, sampling is imbalanced, and measurements are noisy.
{"title":"From pseudo- to non-correspondences: Robust point cloud registration via thickness-guided self-correction","authors":"Yifei Tian, Xiangyu Li, Jieming Yin","doi":"10.1016/j.cag.2025.104528","DOIUrl":"10.1016/j.cag.2025.104528","url":null,"abstract":"<div><div>Most existing point cloud registration methods heavily rely on accurate correspondences between the source and target point clouds, such as point-level or superpoint-level matches. In dense and balanced point clouds where local geometric structures are relatively complete, correspondences are easier to establish, leading to satisfactory registration performance. However, real-world point clouds can be sparse or imbalanced. The absence or inconsistency of local geometric structures makes it challenging to construct reliable correspondences, significantly degrading the performance of mainstream registration methods. To address this challenge, we propose P2NCorr, a pseudo-to-non-correspondence registration method designed for robust alignment in point clouds with missing or low-quality correspondences. Our method leverages an attention-guided soft matching module that uses self- and cross-attention mechanisms to extract contextual features and constructs pseudo correspondences under slack constraints. On this basis, we introduce a geometric consistency metric based on the thickness-guided self-correction module, which enables fine-grained alignment and optimization of micro-surfaces in the fused point cloud. This thickness evaluation serves as a supplementary supervisory signal, forming a comprehensive feedback from the post-registration fusion to the feature extraction module, thereby improving both the accuracy and stability of the registration process. Experiments conducted on public datasets such as ModelNet40 and 7Scenes demonstrate that P2NCorr achieves high-precision registration even under challenging conditions. Especially when point clouds are sparse, sampling is imbalanced, and measurements are noisy, experiments demonstrate strong robustness and promising potential.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104528"},"PeriodicalIF":2.8,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-based haptic rendering for real-time surgical simulation
Pub Date: 2025-12-23 · DOI: 10.1016/j.cag.2025.104524
Lei He, Mingbo Hu, Wenli Xiu, Hongyu Wu, Siming Zheng, Shuai Li, Qian Dong, Aimin Hao
Haptic-based surgical simulation is widely used for training surgical skills. However, simulating the interaction between rigid surgical instruments and soft tissues presents significant technical challenges. In this paper, we propose an energy-based haptic rendering method that achieves both large deformations and rigid–soft haptic interaction. Unlike existing methods, both the rigid tools and the soft tissues are modeled by an energy-based virtual coupling system, and the constraints for soft deformation, tool–object interaction, and haptic rendering are defined by potential energy. Benefiting from these energy-based constraints, we can realize complex surgical operations, such as inserting tools into soft tissue. The virtual coupling of soft tissue separates haptic interaction into two components: soft deformation, which has high computational complexity, and high-frequency haptic rendering. The soft deformation with shape constraints is accelerated on the GPU at a relatively low frequency (60–100 Hz), while the haptic rendering runs in a separate thread at a high frequency (≥ 1000 Hz). We have implemented haptic simulation for two commonly used surgical operations, pressing and pulling. Experimental results show that our method achieves stable feedback forces and non-penetration between the tool and the soft tissue under large soft deformations.
{"title":"Energy-based haptic rendering for real-time surgical simulation","authors":"Lei He , Mingbo Hu , Wenli Xiu , Hongyu Wu , Siming Zheng , Shuai Li , Qian Dong , Aimin Hao","doi":"10.1016/j.cag.2025.104524","DOIUrl":"10.1016/j.cag.2025.104524","url":null,"abstract":"<div><div>Haptic-based surgical simulation is widely utilized for training surgical skills. However, simulating the interaction between rigid surgical instruments and soft tissues presents significant technical challenges. In this paper, we propose an energy-based haptic rendering method to achieve both large deformations and rigid–soft haptic interaction. Different from existing methods, both the rigid tools and soft tissues are modeled by an energy-based virtual coupling system. The constraints of soft deformation, tool-object interaction and haptic rendering are defined by potential energy. Benefit from energy-based constraints, we can realize complex surgical operations, such as inserting tools into soft tissue. The virtual coupling of soft tissue enables the separation of haptic interaction into two components: soft deformation with high computational complexity, and high-frequency haptic rendering. The soft deformation with shape constraints is accelerated GPU at a relatively low frequency(60Hz <span><math><mo>∼</mo></math></span> 100Hz), while the haptic rendering runs in another thread at a high frequency (<span><math><mo>≥</mo></math></span> 1000Hz). We have implemented haptic simulation for two commonly used surgical operations, pressing and pulling. The experimental results show that our method can achieve stable feedback force and non-penetration between the tool and soft tissue under the condition of large soft deformation.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104524"},"PeriodicalIF":2.8,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}