
Computers & Graphics-Uk: Latest Publications

Guided spiral visualization for periodic time series and residual analysis
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-01-20 · DOI: 10.1016/j.cag.2026.104535
Julian Rakuschek , Helwig Hauser , Tobias Schreck
Time series in domains such as climate, traffic, and energy often contain multiple, overlapping periodic patterns. Spiral visualizations can support the exploration of such data, but their effectiveness is limited in practice. Outliers and global trends skew the color mapping, dominant periodic components can hide weaker patterns, selecting a meaningful period length is challenging, and comparing subsequences within large datasets remains cumbersome. To address these challenges, we present a guided analytical workflow centered on an enhanced time series spiral visualization. A regression model tailored to periodic data helps identify suitable period lengths and exposes secondary patterns through its residuals. Visual guidance mitigates issues caused by skewed color mappings and highlights relevant spiral sectors even when global trends or outliers are present. Users can interactively select and compare sectors based on measures of average, trend, and similarity, and examine them in linked views or a provenance dashboard, which maintains a record of all user interactions and allows comparing multiple spirals with each other. Application examples demonstrate use cases where the visual sector selection guidance together with the exploration of model residuals leads to insights. In traffic data, for instance, removing the dominant day–night rhythm reveals rush-hour effects that become visible through exploration of the residuals.
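The period-fitting and residual idea can be sketched in a few lines. The snippet below is a generic illustration, using a truncated Fourier series with a linear trend as the regression model and an Archimedean spiral layout; it is not the paper's actual model, which is tailored to periodic data.

```python
import numpy as np

def fit_periodic(t, y, period, n_harmonics=2):
    """Least-squares fit of a truncated Fourier series plus a linear trend.

    Removing the fitted dominant cycle leaves residuals in which weaker,
    secondary patterns become visible.
    """
    cols = [np.ones_like(t), t]  # intercept + global trend
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.sin(w * t), np.cos(w * t)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    return fitted, y - fitted

def spiral_coords(t, period):
    """Archimedean spiral layout: one full turn per period."""
    turns = t / period
    theta = 2 * np.pi * turns
    radius = 1.0 + turns  # radius grows with the cycle index
    return radius * np.cos(theta), radius * np.sin(theta)

# Synthetic "daily" signal with a weaker weekly pattern and a mild trend.
t = np.arange(0.0, 14.0, 0.1)
y = 3 * np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * t / 7) + 0.1 * t
fitted, resid = fit_periodic(t, y, period=1.0)   # remove the daily cycle
x_pos, y_pos = spiral_coords(t, period=1.0)      # spiral positions for plotting
```

Plotting `resid` on the spiral (one turn per day) would make the weekly pattern stand out once the dominant daily rhythm is removed, mirroring the rush-hour example from the abstract.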
Computers & Graphics-Uk, Volume 135, Article 104535.
Citations: 0
RIFLe-Net: Rotation Invariant Feature Learning Network towards affordance detection in 3D point clouds
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-02-17 · DOI: 10.1016/j.cag.2026.104551
Ramesh Ashok Tabib , Dikshit Hegde , Uma Mudenagudi
In this paper, we propose RIFLe-Net, a novel Rotation-Invariant Feature Learning Network for affordance detection in 3D point clouds. Affordance detection is the process of identifying the potential interactions an object affords, based on features such as shape, structure, and orientation. Affordance detection is meaningful in 3D, as it leverages depth and spatial relationships absent in 2D. 3D point clouds effectively capture geometric structures for affordance detection, but their unstructured, high-dimensional, and orientation-sensitive nature demands rotation-invariant representations and semantic features to identify functional regions beyond raw geometry. To address this, we propose RIFLe-Net, which includes an Invariant Feature Extractor to generate rotation-invariant representations and a Point Perception Encoder to extract perception-aware features, enabling semantic understanding. In particular, the Invariant Feature Extractor projects the input point cloud into its invariant representation using Intrinsic Invariant Projection and aligns the object into its canonical form to extract a global signature of the input point cloud. The Point Perception Encoder captures perception-aware semantic features by integrating local geometry and semantic cues at different levels of abstraction using a Semantic Latent Encoder (SLE). At every level of abstraction, we propose a Neighborhood Feature Extractor to capture local geometric information and an Adaptive EdgeConv to incorporate semantic information in the SLE. Additionally, we employ a Point Affordance Estimator to map multiple affordances to each point under consideration, based on the extracted perception-aware semantic features. We demonstrate the effectiveness of RIFLe-Net through extensive experiments on affordance detection using the 3D Affordance dataset with various rotations and compare the results with state-of-the-art methods.
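The rotation-invariance goal can be illustrated with a common PCA-based canonical-alignment baseline: rotate the centered cloud so its principal axes coincide with the coordinate axes. The paper's Intrinsic Invariant Projection is more sophisticated, so treat this only as a sketch of the property being sought.

```python
import numpy as np

def canonicalize(points):
    """Align a point cloud to a canonical pose via PCA.

    Center the cloud, then rotate it so the principal axes coincide with
    the coordinate axes. The per-axis sign ambiguity of eigenvectors is
    resolved deterministically via the third moment (skew) of each axis.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    eigvecs = eigvecs[:, ::-1]               # major axis first
    aligned = centered @ eigvecs
    signs = np.sign(np.sum(aligned**3, axis=0))
    signs[signs == 0] = 1.0
    return aligned * signs

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])

# An arbitrary rotation of the input should not change the canonical form.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
a = canonicalize(cloud)
b = canonicalize(cloud @ R.T)
```

Any feature computed on the canonicalized coordinates is then invariant to rotations of the input, which is the property the network's learned representation has to provide.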
Computers & Graphics-Uk, Volume 135, Article 104551.
Citations: 0
Leveraging LLMs for semi-automatic corpus filtration in systematic literature reviews
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-02-16 · DOI: 10.1016/j.cag.2026.104537
Lucas Joos, Daniel A. Keim, Maximilian T. Fischer
The creation of systematic literature reviews (SLR) is critical for analyzing the landscape of a research field and guiding future research directions. However, retrieving and filtering the literature corpus for an SLR is highly time-consuming and requires extensive manual effort, as keyword-based searches in digital libraries often return numerous irrelevant publications. In this work, we propose a pipeline leveraging multiple large language models (LLMs), classifying papers based on descriptive prompts and deciding jointly using a consensus scheme. The entire process is human-supervised and interactively controlled via our open-source visual analytics web interface, LLMSurver, which enables real-time inspection and modification of model outputs. We evaluate our approach using ground-truth data from a recent SLR comprising 8323 candidate papers, benchmarking both open and commercial state-of-the-art LLMs from mid-2024 and fall 2025. Results demonstrate that our pipeline significantly reduces manual effort while achieving lower error rates than single human annotators. Furthermore, modern open-source models prove sufficient for this task, making the method accessible and cost-effective. Overall, our work demonstrates how responsible human–AI collaboration can accelerate and enhance systematic literature reviews within academic workflows.
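A consensus scheme over several model verdicts can be sketched as follows; the 75% threshold and the "flag for human review" fallback are illustrative assumptions, not the paper's exact rule.

```python
from collections import Counter

def consensus(votes, threshold=0.75):
    """Combine per-model include/exclude verdicts into one decision.

    A paper is auto-included or auto-excluded only when the leading
    verdict reaches the agreement threshold; otherwise it is flagged
    for human review, keeping the process human-supervised.
    """
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= threshold:
        return label
    return "review"

# Hypothetical verdicts from four LLMs for three candidate papers.
decisions = [
    consensus(["include", "include", "include", "exclude"]),  # 3/4 agree
    consensus(["include", "exclude", "include", "exclude"]),  # tie -> review
    consensus(["exclude"] * 4),                               # unanimous
]
```

Only the contested papers then need manual inspection, which is where the bulk of the reported effort reduction would come from.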
Computers & Graphics-Uk, Volume 135, Article 104537.
Citations: 0
Foreword to the special section on 3D object retrieval 2025 Symposium (3DOR2025)
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-02-12 · DOI: 10.1016/j.cag.2026.104542
Ioannis Pratikakis, Niloy Mitra, Paul Guerrero, Remco Veltkamp
Computers & Graphics-Uk, Volume 135, Article 104542.
Citations: 0
From design to user experience: The creation and assessment of ASANA, an immersive VR task for situation awareness and spatial ability
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-02-05 · DOI: 10.1016/j.cag.2026.104541
Allison Bayro , Shannon P.D. McGarry , Rebecca NeSmith , Joseph T. Coyne , Heejin Jeong
Accurate assessment of situation awareness (SA) and spatial ability (SpA) is critical in aviation, yet SA tools often interrupt tasks or offer limited temporal resolution, while SpA measures frequently rely on static 2D stimuli with low ecological validity. These limitations call for approaches that capture both abilities in realistic, dynamic contexts. Virtual reality (VR) offers this capability by enabling immersive 3D navigation while simultaneously recording performance and multimodal data. However, VR-based assessments must consider user-experience factors such as workload, affect, and simulator sickness, which can influence performance and the interpretability of assessment outcomes. Building on the preliminary study presented at IEEE VR’s Workshop on the eXtended Reality for Industrial and Occupational Supports, this paper describes the design of an immersive flight-navigation task that assesses SA and SpA, called Assessing Spatial Abilities in Naval Aviation (ASANA). We report user-experience outcomes from 106 U.S. Navy students, showing moderate workload, positive valence, near-neutral arousal, and slightly positive dominance. Simulator sickness increased from pre- to post-exposure, but post-exposure medians remained low, indicating generally mild symptoms. The correlation results showed that ASANA navigation efficiency aligned with established desktop-based SpA metrics. In addition, higher freeze-probe SA accuracy was associated with more efficient performance on an embedded, SME-informed in-scenario SA metric. Together, these findings support ASANA as a tolerable, interpretable VR platform for studying SpA and SA in ecologically-grounded context, and motivate future work that leverages synchronized multimodal sensing to model SA dynamics and SpA–SA interactions.
Computers & Graphics-Uk, Volume 135, Article 104541.
Citations: 0
Single pass Poisson disk sampling via circle packing
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-02-16 · DOI: 10.1016/j.cag.2026.104548
Jun Cui , Zeyu Li , Yuxiao Li , Ziheng Guo , Ziming Dai , Jiawan Zhang
Existing Poisson-disk sampling methods struggle to simultaneously preserve Poisson-disk properties in a controllable and unified framework. We therefore propose a spatial covering model based on constrained cells. This model maintains both the minimum-distance and maximal-coverage properties within each cell and constructs the cells sequentially in a single pass, while allowing flexible control over sample density and a smooth trade-off between noise and aliasing in the blue-noise distribution. Guided by this geometric model, we propose a simple Poisson-disk sampling method based on circle packing that generates high-quality samples with extreme efficiency. The initial sampling in our method yields a distribution with very high spatial coverage, so the extraction of gap primitives can be skipped in scenarios that do not strictly require maximal coverage. We extend our method to adaptive sampling of arbitrary density functions in linear time. Experimental results demonstrate our method’s efficiency and its ability to generate blue-noise samples compared to state-of-the-art approaches. Application results are presented in image stippling and surface remeshing.
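For contrast with the proposed single-pass construction, the minimum-distance property that every Poisson-disk sampler must maintain can be illustrated with classic rejection-based dart throwing, which the paper's circle-packing approach is designed to avoid:

```python
import numpy as np

def dart_throwing(n_target, r_min, max_tries=20000, seed=0):
    """Classic dart throwing in the unit square.

    A candidate point is accepted only if it keeps the minimum distance
    r_min to all previously accepted samples. This rejection-based baseline
    becomes slow as the domain fills up, which motivates constructive
    schemes such as the single-pass circle packing proposed in the paper.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(max_tries):
        p = rng.random(2)
        if all(np.hypot(*(p - q)) >= r_min for q in samples):
            samples.append(p)
            if len(samples) == n_target:
                break
    return np.array(samples)

pts = dart_throwing(n_target=100, r_min=0.05)

# Verify the minimum-distance (Poisson-disk) property over all pairs.
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
```

The maximal-coverage property is the complementary guarantee: no point of the domain should be farther than r_min from every sample, which dart throwing does not ensure without a separate gap-filling pass.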
Computers & Graphics-Uk, Volume 135, Article 104548.
Citations: 0
Autoencoder-based regularization methods for parametric and inverse projections
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-02-17 · DOI: 10.1016/j.cag.2026.104552
Frederik L. Dennig , Daniela Blumberg , Nina Geyer , Yannick Metz
Neural networks are used to create parametric and invertible multidimensional data projections. In this context, parametric projections enable the embedding of previously unseen data points without requiring a complete recomputation of the projection, while invertible projections allow for the reconstruction or generation of data in the original space. In this paper, we investigate the use of autoencoder (AE) architectures for simultaneously learning parametric and inverse mappings independent of the underlying dimensionality reduction method. We introduce and compare three regularization methods for autoencoder architectures designed to learn a forward mapping into two-dimensional space induced by the projection as well as inverse mappings back into the original feature space. To evaluate their performance, we conduct a systematic study on six datasets of varying dimensionality and structural complexity, using the established projection techniques t-SNE and UMAP as training targets. Our evaluation combines both quantitative metrics and qualitative assessments. The results demonstrate that AEs, particularly when trained with Kullback–Leibler divergence regularization, can achieve high-quality reconstructions while providing users with control over the degree of smoothing in the projection. Compared to disjoint neural networks, AE architectures yield superior generative capabilities for out-of-distribution samples, while still providing comparable reconstruction quality and parametric projection accuracy. This highlights their potential for interactive data generation in use cases such as classifier evaluation and counterfactual creation.
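The forward (parametric) and inverse mappings can be illustrated with a deliberately simple linear stand-in fit by least squares against a precomputed 2-D layout; the paper's actual models are nonlinear autoencoders with additional regularization, so this is only a sketch of the two directions of mapping.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 10-D observations driven by 2 latent coordinates; the latent
# coordinates double as a stand-in for a precomputed 2-D layout (e.g. the
# output of t-SNE or UMAP on the same data).
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))
Y = latent  # assumed projection layout to imitate

# Linear stand-ins for the encoder (parametric projection) and the
# decoder (inverse projection), each fit by least squares.
W_enc, *_ = np.linalg.lstsq(X, Y, rcond=None)
W_dec, *_ = np.linalg.lstsq(Y, X, rcond=None)

# Parametric use: embed unseen points without recomputing the projection.
L_new = rng.normal(size=(5, 2))
X_new = L_new @ mixing
Y_new = X_new @ W_enc

# Inverse use: reconstruct high-dimensional data from 2-D positions.
X_rec = Y @ W_dec
rec_err = np.mean((X_rec - X) ** 2)
```

An autoencoder replaces both least-squares maps with nonlinear networks trained jointly, which is what allows the regularizers studied in the paper (such as the Kullback-Leibler term) to trade off smoothness against fidelity.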
Computers & Graphics-Uk, Volume 135, Article 104552.
Citations: 0
Foreword to the special section on Spanish Computer Graphics Conference 2025
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-04-01 · Epub Date: 2026-02-07 · DOI: 10.1016/j.cag.2026.104543
Ana Serrano , Oscar Argudo , Olatz Iparraguirre
Computers & Graphics-Uk, Volume 135, Article 104543.
Citations: 0
Efficient semantic-aware texture optimization for 3D scene reconstruction
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2026-02-01 · Epub Date: 2026-01-05 · DOI: 10.1016/j.cag.2025.104529
Xiaoqun Wu, Tian Yang, Liu Yu, Jian Cao, Huiling Si
To address the issue of blurry artifacts in texture mapping for 3D reconstruction, we propose an innovative approach that optimizes textures based on semantic-aware similarity. Unlike previous algorithms that require significant computational costs, our method introduces a novel metric that provides a more efficient solution for texture mapping. This allows for high-quality texture mapping in 3D reconstructions using multi-view captured images. Our approach begins by establishing mapping within the image sequence using the available 3D information. We then quantitatively assess pixel similarity using our proposed semantic-aware metric, which guides the texture image generation process. By leveraging semantic-aware similarity, we constrain texture mapping and enhance texture clarity. Finally, the texture image is projected onto the geometry to produce a 3D textured mesh. Experimental results conclusively demonstrate that our method can generate 3D meshes with crisp, high-fidelity textures faster than existing methods, even in scenarios involving substantial camera pose errors and low-precision reconstruction geometry.
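Similarity-guided view blending, the core idea behind weighting the candidate views that observe a surface point, can be sketched as follows. The feature vectors and the cosine-similarity weighting here are illustrative assumptions, not the paper's semantic-aware metric.

```python
import numpy as np

def blend_views(colors, features, ref_idx=0):
    """Blend per-view color candidates for one texel.

    Each view is weighted by the cosine similarity of its (hypothetical)
    semantic feature vector to a reference view; dissimilar views, e.g.
    blurred or misaligned ones, are down-weighted to zero rather than
    averaged in, which is what keeps the texture crisp.
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ feats[ref_idx]        # cosine similarity to reference
    weights = np.clip(sims, 0.0, None)   # drop dissimilar views entirely
    weights /= weights.sum()
    return weights @ colors

# Three candidate views observing the same surface point.
colors = np.array([[0.8, 0.2, 0.1],    # sharp reference view
                   [0.8, 0.2, 0.1],    # consistent with the reference
                   [0.2, 0.7, 0.6]])   # blurred / misaligned view
features = np.array([[1.0, 0.0],
                     [0.9, 0.1],
                     [-0.2, 1.0]])
texel = blend_views(colors, features)
```

Naive averaging of all three views would produce a washed-out color; similarity weighting excludes the inconsistent view, which mirrors how the paper's metric constrains the mapping to avoid blur.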
Citations: 0
Preface to the Special Section: ACM MIG 2024
IF 2.8 Zone 4 Computer Science Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2026-02-01 Epub Date: 2026-01-08 DOI: 10.1016/j.cag.2026.104531
Soraia Raupp Musse, Sheldon Andrews
Citations: 0