Pub Date: 2026-04-01 | Epub Date: 2026-01-20 | DOI: 10.1016/j.cag.2026.104535
Julian Rakuschek, Helwig Hauser, Tobias Schreck
Time series in domains such as climate, traffic, and energy often contain multiple, overlapping periodic patterns. Spiral visualizations can support the exploration of such data, but their effectiveness is limited in practice. Outliers and global trends skew the color mapping, dominant periodic components can hide weaker patterns, selecting a meaningful period length is challenging, and comparing subsequences within large datasets remains cumbersome. To address these challenges, we present a guided analytical workflow centered on an enhanced time series spiral visualization. A regression model tailored to periodic data helps identify suitable period lengths and exposes secondary patterns through its residuals. Visual guidance mitigates issues caused by skewed color mappings and highlights relevant spiral sectors even when global trends or outliers are present. Users can interactively select and compare sectors based on measures of average, trend, and similarity, and examine them in linked views or a provenance dashboard, which maintains a record of all user interactions and allows comparing multiple spirals with each other. Application examples demonstrate use cases where the visual sector selection guidance together with the exploration of model residuals leads to insights. In traffic data, for instance, removing the dominant day–night rhythm reveals rush-hour effects that become visible through exploration of the residuals.
Title: Guided spiral visualization for periodic time series and residual analysis. Computers & Graphics, vol. 135, Article 104535.
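The residual idea described above can be sketched numerically: fit a regression on a harmonic basis for the dominant period and inspect what remains. This is a minimal illustration of periodic regression and residual analysis in general, not the paper's actual model; the period length (24), harmonic count, and synthetic data are assumptions for the example.

```python
import numpy as np

def fit_periodic(t, y, period, harmonics=2):
    """Least-squares fit of y on a sinusoidal basis with the given period."""
    cols = [np.ones_like(t)]
    for k in range(1, harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

t = np.arange(0, 24 * 14, dtype=float)         # two weeks of hourly samples
daily = 10 * np.sin(2 * np.pi * t / 24)        # dominant day-night rhythm
weekly = 3 * np.sin(2 * np.pi * t / (24 * 7))  # weaker secondary pattern
y = daily + weekly

residuals = y - fit_periodic(t, y, period=24)
# The fit absorbs the daily rhythm; the weekly pattern survives in the residuals.
```

Because the record covers whole periods of both components, the daily fit is exact and the residuals coincide with the weekly component, mirroring how removing a dominant rhythm can expose weaker patterns.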
Pub Date: 2026-04-01 | Epub Date: 2026-02-17 | DOI: 10.1016/j.cag.2026.104551
Ramesh Ashok Tabib, Dikshit Hegde, Uma Mudenagudi
In this paper, we propose RIFLe-Net, a novel Rotation-Invariant Feature Learning Network for affordance detection in 3D point clouds. Affordance detection is the process of identifying potential interactions with an object based on features such as shape, structure, and orientation. Affordance detection is meaningful in 3D, as it leverages depth and spatial relationships absent in 2D. 3D point clouds effectively capture geometric structures for affordance detection, but their unstructured, high-dimensional, and orientation-sensitive nature demands rotation-invariant representations and semantic features to identify functional regions beyond raw geometry. To address this, RIFLe-Net includes an Invariant Feature Extractor to generate rotation-invariant representations and a Point Perception Encoder to extract perception-aware features, enabling semantic understanding. In particular, the Invariant Feature Extractor projects the input point cloud into its invariant representation using Intrinsic Invariant Projection and aligns the object into its canonical form to extract a global signature of the input point cloud. The Point Perception Encoder captures perception-aware semantic features by integrating local geometry and semantic cues at different levels of abstraction using a Semantic Latent Encoder (SLE). At every level of abstraction, we propose a Neighborhood Feature Extractor to capture local geometric information and Adaptive EdgeConv to provide semantic information in the SLE. Additionally, we employ a Point Affordance Estimator to map multiple affordances to each point under consideration based on the extracted perception-aware semantic features. We demonstrate the effectiveness of RIFLe-Net through extensive experiments on affordance detection using the 3D Affordance dataset with various rotations and compare the results with state-of-the-art methods.
Title: RIFLe-Net: Rotation Invariant Feature Learning Network towards affordance detection in 3D point clouds. Computers & Graphics, vol. 135, Article 104551.
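The core property an invariant feature extractor relies on can be shown with a toy descriptor: quantities derived from pairwise or centroid distances are unchanged by any rigid rotation of the point cloud. The sorted-distance signature below is an illustrative stand-in, not RIFLe-Net's actual Intrinsic Invariant Projection.

```python
import numpy as np

def centroid_distance_signature(points):
    """Rotation-invariant toy descriptor: sorted distances to the centroid."""
    c = points.mean(axis=0)
    return np.sort(np.linalg.norm(points - c, axis=1))

def random_rotation(rng):
    """Uniform random orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))  # fix column signs for a canonical Q

rng = np.random.default_rng(1)
cloud = rng.normal(size=(50, 3))
R = random_rotation(rng)

sig_orig = centroid_distance_signature(cloud)
sig_rot = centroid_distance_signature(cloud @ R.T)
# The two signatures match: rotating the cloud leaves the descriptor unchanged.
```

Distances are preserved by any orthogonal transform, so descriptors built from them need no canonical alignment, which is why such intrinsic quantities are a common starting point for rotation-invariant learning.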
Pub Date: 2026-04-01 | Epub Date: 2026-02-16 | DOI: 10.1016/j.cag.2026.104537
Lucas Joos, Daniel A. Keim, Maximilian T. Fischer
The creation of systematic literature reviews (SLR) is critical for analyzing the landscape of a research field and guiding future research directions. However, retrieving and filtering the literature corpus for an SLR is highly time-consuming and requires extensive manual effort, as keyword-based searches in digital libraries often return numerous irrelevant publications. In this work, we propose a pipeline leveraging multiple large language models (LLMs), classifying papers based on descriptive prompts and deciding jointly using a consensus scheme. The entire process is human-supervised and interactively controlled via our open-source visual analytics web interface, LLMSurver, which enables real-time inspection and modification of model outputs. We evaluate our approach using ground-truth data from a recent SLR comprising 8323 candidate papers, benchmarking both open and commercial state-of-the-art LLMs from mid-2024 and fall 2025. Results demonstrate that our pipeline significantly reduces manual effort while achieving lower error rates than single human annotators. Furthermore, modern open-source models prove sufficient for this task, making the method accessible and cost-effective. Overall, our work demonstrates how responsible human–AI collaboration can accelerate and enhance systematic literature reviews within academic workflows.
Title: Leveraging LLMs for semi-automatic corpus filtration in systematic literature reviews. Computers & Graphics, vol. 135, Article 104537.
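A consensus scheme of the kind the abstract mentions can be sketched as a simple vote aggregator. The label names and tie-handling below are illustrative assumptions, not the paper's exact protocol; routing ties to human review reflects the human-supervised spirit of the pipeline.

```python
from collections import Counter

def consensus(votes, tie_label="needs_review"):
    """Majority vote over per-model labels; ties are flagged for a human.

    votes: list of labels, one per LLM (e.g. "include" / "exclude").
    """
    counts = Counter(votes)
    (top, n_top), *rest = counts.most_common()
    if rest and rest[0][1] == n_top:  # two labels share the lead
        return tie_label
    return top

# Three models vote on one candidate paper; the majority decides.
decision = consensus(["include", "include", "exclude"])
```

Each ambiguous case surfacing as `needs_review` is what makes the scheme semi-automatic: the models filter the bulk, and humans resolve the disagreements.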
Pub Date: 2026-04-01 | Epub Date: 2026-02-12 | DOI: 10.1016/j.cag.2026.104542
Ioannis Pratikakis, Niloy Mitra, Paul Guerrero, Remco Veltkamp
Title: Foreword to the special section on 3D object retrieval 2025 Symposium (3DOR2025). Computers & Graphics, vol. 135, Article 104542.
Pub Date: 2026-04-01 | Epub Date: 2026-02-05 | DOI: 10.1016/j.cag.2026.104541
Allison Bayro, Shannon P.D. McGarry, Rebecca NeSmith, Joseph T. Coyne, Heejin Jeong
Accurate assessment of situation awareness (SA) and spatial ability (SpA) is critical in aviation, yet SA tools often interrupt tasks or offer limited temporal resolution, while SpA measures frequently rely on static 2D stimuli with low ecological validity. These limitations call for approaches that capture both abilities in realistic, dynamic contexts. Virtual reality (VR) offers this capability by enabling immersive 3D navigation while simultaneously recording performance and multimodal data. However, VR-based assessments must consider user-experience factors such as workload, affect, and simulator sickness, which can influence performance and the interpretability of assessment outcomes. Building on the preliminary study presented at IEEE VR’s Workshop on the eXtended Reality for Industrial and Occupational Supports, this paper describes the design of an immersive flight-navigation task that assesses SA and SpA, called Assessing Spatial Abilities in Naval Aviation (ASANA). We report user-experience outcomes from 106 U.S. Navy students, showing moderate workload, positive valence, near-neutral arousal, and slightly positive dominance. Simulator sickness increased from pre- to post-exposure, but post-exposure medians remained low, indicating generally mild symptoms. The correlation results showed that ASANA navigation efficiency aligned with established desktop-based SpA metrics. In addition, higher freeze-probe SA accuracy was associated with more efficient performance on an embedded, SME-informed in-scenario SA metric. Together, these findings support ASANA as a tolerable, interpretable VR platform for studying SpA and SA in an ecologically grounded context, and motivate future work that leverages synchronized multimodal sensing to model SA dynamics and SpA–SA interactions.
Title: From design to user experience: The creation and assessment of ASANA, an immersive VR task for situation awareness and spatial ability. Computers & Graphics, vol. 135, Article 104541.
Pub Date: 2026-04-01 | Epub Date: 2026-02-16 | DOI: 10.1016/j.cag.2026.104548
Jun Cui, Zeyu Li, Yuxiao Li, Ziheng Guo, Ziming Dai, Jiawan Zhang
Existing Poisson-disk sampling methods struggle to simultaneously preserve Poisson-disk properties in a controllable and unified framework. We therefore propose a spatial covering model based on constrained cells. This model maintains both the minimum-distance and maximal-coverage properties within each cell, constructing the cells consecutively in a single pass, while allowing for flexible control over sample density and a smooth trade-off between noise and aliasing in the blue-noise distribution. Guided by this geometric model, we propose a simple Poisson-disk sampling method via circle packing that generates high-quality samples with extreme efficiency. The initial sampling in our method yields a distribution with extremely high spatial coverage, so the extraction of gap primitives can be skipped in scenarios that do not need to satisfy maximal coverage strictly. We extend our method to adaptive sampling of arbitrary density functions in linear time. Experimental results demonstrate our method’s efficiency and its ability to generate blue-noise samples compared to state-of-the-art approaches. Application results are presented in image stippling and surface remeshing.
Title: Single pass Poisson disk sampling via circle packing. Computers & Graphics, vol. 135, Article 104548.
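The minimum-distance property that Poisson-disk samplers preserve can be illustrated with the classic dart-throwing baseline: propose random points and accept one only if it keeps distance r from all accepted samples. This is the textbook reference method, not the paper's circle-packing construction, and the radius and candidate budget are arbitrary choices for the sketch.

```python
import math
import random

def poisson_disk(r, n_candidates=2000, seed=0):
    """Dart-throwing Poisson-disk sampling in the unit square.

    Accepts a candidate only if it is at least r away from every
    previously accepted sample (the minimum-distance property).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_candidates):
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= r for q in samples):
            samples.append(p)
    return samples

pts = poisson_disk(r=0.1)
# Every accepted pair of points respects the minimum-distance constraint.
```

Dart throwing is O(n²) per accepted sample and offers no coverage guarantee, which is precisely the kind of inefficiency that single-pass constructions like the one in the abstract aim to avoid.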
Pub Date: 2026-04-01 | Epub Date: 2026-02-17 | DOI: 10.1016/j.cag.2026.104552
Frederik L. Dennig, Daniela Blumberg, Nina Geyer, Yannick Metz
Neural networks are used to create parametric and invertible multidimensional data projections. In this context, parametric projections enable the embedding of previously unseen data points without requiring a complete recomputation of the projection, while invertible projections allow for the reconstruction or generation of data in the original space. In this paper, we investigate the use of autoencoder (AE) architectures for simultaneously learning parametric and inverse mappings independent of the underlying dimensionality reduction method. We introduce and compare three regularization methods for autoencoder architectures designed to learn a forward mapping into two-dimensional space induced by the projection as well as inverse mappings back into the original feature space. To evaluate their performance, we conduct a systematic study on six datasets of varying dimensionality and structural complexity, using the established projection techniques t-SNE and UMAP as training targets. Our evaluation combines both quantitative metrics and qualitative assessments. The results demonstrate that AEs, particularly when trained with Kullback–Leibler divergence regularization, can achieve high-quality reconstructions while providing users with control over the degree of smoothing in the projection. Compared to disjoint neural networks, AE architectures yield superior generative capabilities for out-of-distribution samples, while still providing comparable reconstruction quality and parametric projection accuracy. This highlights their potential for interactive data generation in use cases such as classifier evaluation and counterfactual creation.
Title: Autoencoder-based regularization methods for parametric and inverse projections. Computers & Graphics, vol. 135, Article 104552.
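A Kullback–Leibler regularization term of the kind the abstract mentions can be sketched as the divergence between pairwise-similarity distributions of the high-dimensional data and its low-dimensional embedding (in the t-SNE spirit). The Gaussian kernel, fixed bandwidth, and random stand-in embedding below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pairwise_probs(X, sigma=1.0):
    """Normalized Gaussian-kernel similarities over all point pairs."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    p = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(p, 0.0)  # no self-similarity
    return p / p.sum()

def kl_regularizer(X_high, X_low, eps=1e-12):
    """KL(P || Q) between high- and low-dimensional similarity distributions."""
    P = pairwise_probs(X_high)
    Q = pairwise_probs(X_low)
    return float((P * np.log((P + eps) / (Q + eps))).sum())

rng = np.random.default_rng(0)
X_high = rng.normal(size=(20, 5))
X_low = rng.normal(size=(20, 2))  # stand-in for a learned 2D embedding
loss = kl_regularizer(X_high, X_low)
# The loss is zero only when the embedding reproduces the pairwise similarities.
```

Minimizing such a term alongside the reconstruction loss is one way an autoencoder can be steered toward embeddings that preserve neighborhood structure.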
Pub Date: 2026-02-01 | Epub Date: 2026-01-05 | DOI: 10.1016/j.cag.2025.104529
Xiaoqun Wu, Tian Yang, Liu Yu, Jian Cao, Huiling Si
To address the issue of blurry artifacts in texture mapping for 3D reconstruction, we propose an innovative approach that optimizes textures based on semantic-aware similarity. Unlike previous algorithms that require significant computational costs, our method introduces a novel metric that provides a more efficient solution for texture mapping. This allows for high-quality texture mapping in 3D reconstructions using multi-view captured images. Our approach begins by establishing mapping within the image sequence using the available 3D information. We then quantitatively assess pixel similarity using our proposed semantic-aware metric, which guides the texture image generation process. By leveraging semantic-aware similarity, we constrain texture mapping and enhance texture clarity. Finally, the texture image is projected onto the geometry to produce a 3D textured mesh. Experimental results conclusively demonstrate that our method can generate 3D meshes with crisp, high-fidelity textures faster than existing methods, even in scenarios involving substantial camera pose errors and low-precision reconstruction geometry.
Title: Efficient semantic-aware texture optimization for 3D scene reconstruction. Computers & Graphics, vol. 134, Article 104529.