
Latest Publications in Computer Graphics Forum

Lightweight Voronoi Sponza
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-02-27 DOI: 10.1111/cgf.70003
{"title":"Lightweight Voronoi Sponza","authors":"","doi":"10.1111/cgf.70003","DOIUrl":"https://doi.org/10.1111/cgf.70003","url":null,"abstract":"","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GeoCode: Interpretable Shape Programs
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-02-12 DOI: 10.1111/cgf.15276
Ofek Pearl, Itai Lang, Yuhua Hu, Raymond A. Yeh, Rana Hanocka

The task of crafting procedural programs capable of generating structurally valid 3D shapes easily and intuitively remains an elusive goal in computer vision and graphics. Within the graphics community, generating procedural 3D models has shifted to using node graph systems. They allow the artist to create complex shapes and animations through visual programming. As high-level design tools, they have made procedural 3D modelling more accessible. However, crafting those node graphs demands expertise and training. We present GeoCode, a novel framework designed to extend an existing node graph system and significantly lower the bar for the creation of new procedural 3D shape programs. Our approach meticulously balances expressiveness and generalization for part-based shapes. We propose a curated set of new geometric building blocks that are expressive and reusable across domains. We showcase three innovative and expressive programs developed through our technique and geometric building blocks. Our programs enforce intricate rules, empowering users to execute intuitive high-level parameter edits that seamlessly propagate throughout the entire shape at a lower level while maintaining its validity. To evaluate the user-friendliness of our geometric building blocks among non-experts, we conduct a user study that demonstrates their ease of use and highlights their applicability across diverse domains. Empirical evidence shows the superior accuracy of GeoCode in inferring and recovering 3D shapes compared to an existing competitor. Furthermore, our method demonstrates superior expressiveness compared to alternatives that utilize coarse primitives. Notably, we illustrate the ability to execute controllable local and global shape manipulations. Our code, programs, datasets and Blender add-on are available at https://github.com/threedle/GeoCode.
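
To make the idea of high-level parameter edits propagating through a part-based shape program concrete, here is a minimal, self-contained sketch in Python. The chair program, its parameter names and the validity rule are illustrative assumptions and are not taken from the GeoCode code base or its Blender add-on.

```python
# Illustrative sketch only: a toy part-based shape program in the spirit of
# "high-level parameter edits that propagate throughout the entire shape".
# All names (ChairParams, make_chair, the leg layout) are hypothetical.
from dataclasses import dataclass

@dataclass
class ChairParams:
    seat_height: float = 0.45    # high-level, user-facing parameter (metres)
    seat_width: float = 0.40
    backrest_ratio: float = 0.8  # backrest height as a fraction of seat height

def make_chair(p: ChairParams):
    """Expand high-level parameters into low-level part placements."""
    if p.seat_height <= 0 or p.seat_width <= 0 or not (0 < p.backrest_ratio <= 2):
        raise ValueError("parameters would produce an invalid shape")
    half = p.seat_width / 2
    legs = [(sx * half, sy * half, p.seat_height / 2)     # leg centres
            for sx in (-1, 1) for sy in (-1, 1)]
    seat = {"centre": (0.0, 0.0, p.seat_height), "width": p.seat_width}
    backrest = {"base_z": p.seat_height,
                "top_z": p.seat_height * (1 + p.backrest_ratio)}
    return {"legs": legs, "seat": seat, "backrest": backrest}

# A single high-level edit (raising the seat) consistently moves the legs,
# seat and backrest, which is the behaviour the paper aims for at scale.
print(make_chair(ChairParams(seat_height=0.55)))
```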

Citations: 0
Immersive and Interactive Learning With eDIVE: A Solution for Creating Collaborative VR Education Experiences
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-02-10 DOI: 10.1111/cgf.70001
Vojtěch Brůža, Alžběta Šašinková, Čeněk Šašinka, Zdeněk Stachoň, Barbora Kozlíková, Jiří Chmelík

Virtual reality (VR) technology has become increasingly popular in education as a tool for enhancing learning experiences and engagement. This paper addresses the lack of a suitable tool for creating multi-user immersive educational content for virtual environments by introducing a novel solution called eDIVE. The solution is designed to facilitate the development of collaborative immersive educational VR experiences. Developed in close collaboration with psychologists and educators, it addresses specific functional needs identified by these professionals. eDIVE allows creators to extensively modify, expand or develop entirely new VR experiences. eDIVE ultimately makes collaborative VR education more accessible and inclusive for all stakeholders. Its utility is demonstrated through exemplary learning scenarios, developed in collaboration with experienced educators, and evaluated through real-world user studies.

Citations: 0
DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-02-07 DOI: 10.1111/cgf.70002
Yuhang Huang, Takashi Kanai

In the field of brittle fracture animation, generating realistic destruction animations using physics-based simulation methods is computationally expensive. While techniques based on Voronoi diagrams or pre-fractured patterns are effective for real-time applications, they fail to incorporate collision conditions when determining fractured shapes during runtime. This paper introduces a novel learning-based approach for predicting fractured shapes based on collision dynamics at runtime. Our approach seamlessly integrates realistic brittle fracture animations with rigid body simulations, utilising boundary element method (BEM) brittle fracture simulations to generate training data. To integrate collision scenarios and fractured shapes into a deep learning framework, we introduce generative geometric segmentation, distinct from both instance and semantic segmentation, to represent 3D fragment shapes. We propose an eight-dimensional latent code to address the challenge of optimising multiple discrete fracture pattern targets that share similar continuous collision latent codes. This code will follow a discrete normal distribution corresponding to a specific fracture pattern within our latent impulse representation design. This adaptation enables the prediction of fractured shapes using neural discrete representation learning. Our experimental results show that our approach generates considerably more detailed brittle fractures than existing techniques, while the computational time is typically reduced compared to traditional simulation methods at comparable resolutions.
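
The phrase "neural discrete representation learning" refers to quantising a continuous latent vector against a learned codebook. The sketch below shows only that quantisation step on a toy 8-dimensional code; the codebook size, the random initialisation and the absence of any trained encoder are assumptions made purely for illustration, not the paper's architecture.

```python
# Minimal vector-quantisation sketch, loosely in the spirit of "neural discrete
# representation learning". The codebook here is random; in practice it would
# be learned jointly with an encoder over collision conditions.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, codebook_size = 8, 64
codebook = rng.normal(size=(codebook_size, latent_dim))   # placeholder codebook

def quantise(z_continuous: np.ndarray) -> tuple[int, np.ndarray]:
    """Snap a continuous collision code to its nearest discrete codebook entry."""
    dists = np.linalg.norm(codebook - z_continuous, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

# Two similar continuous collision codes typically snap to the same discrete
# fracture-pattern code, which is the behaviour the latent design relies on.
z_a = rng.normal(size=latent_dim)
z_b = z_a + 0.01 * rng.normal(size=latent_dim)
print(quantise(z_a)[0], quantise(z_b)[0])
```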

Citations: 0
MANDALA—Visual Exploration of Anomalies in Industrial Multivariate Time Series Data
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-02-06 DOI: 10.1111/cgf.70000
J. Suschnigg, B. Mutlu, G. Koutroulis, H. Hussain, T. Schreck

The detection, description and understanding of anomalies in multivariate time series data is an important task in several industrial domains. Automated data analysis provides many tools and algorithms to detect anomalies, while visual interfaces enable domain experts to explore and analyze data interactively to gain insights using their expertise. Anomalies in multivariate time series can be diverse with respect to the dimensions, temporal occurrence and length within a dataset. Their detection and description depend on the analyst's domain, task and background knowledge. Therefore, anomaly analysis is often an underspecified problem. We propose a visual analytics tool called MANDALA (Multivariate ANomaly Detection And expLorAtion), which uses kernel density estimation to detect anomalies and provides users with visual means to explore and explain them. To assess our algorithm's effectiveness, we evaluate its ability to identify different types of anomalies using a synthetic dataset generated with the GutenTAG anomaly and time series generator. Our approach allows users to define normal data interactively first. Next, they can explore anomaly candidates, their related dimensions and their temporal scope. Our carefully designed visual analytics components include a tailored scatterplot matrix with semantic zooming features that visualize normal data through hexagonal binning plots and overlay candidate anomaly data as scatterplots. In addition, the system supports the analysis on a broader scope involving all dimensions simultaneously or on a smaller scope involving dimension pairs only. We define a taxonomy of important types of anomaly patterns, which can guide the interactive analysis process. The effectiveness of our system is demonstrated through a use case scenario on industrial data conducted with domain experts from the automotive domain and a user study utilizing a public dataset from the aviation domain.
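
The core detection idea named in the abstract, kernel density estimation over user-defined normal data, can be sketched in a few lines. The toy data and the percentile-based threshold below are assumptions for illustration and are not MANDALA's exact procedure.

```python
# Sketch of density-based anomaly scoring: fit a kernel density estimate on
# data marked as normal, then flag low-density points as anomaly candidates.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
normal = rng.normal(loc=0.0, scale=1.0, size=(2, 500))    # 2 dims x 500 samples
kde = gaussian_kde(normal)                                 # expects (dims, samples)

candidates = np.array([[0.1, 4.0, -0.3],                   # 2 dims x 3 candidates
                       [0.2, 4.5,  0.1]])
density = kde(candidates)
threshold = np.percentile(kde(normal), 1)                  # cut-off from normal data

# The middle candidate lies far from the normal cloud and is flagged.
print(density < threshold)
```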

Citations: 0
A Texture-Free Practical Model for Realistic Surface-Based Rendering of Woven Fabrics
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-02-02 DOI: 10.1111/cgf.15283
Apoorv Khattar, Junqiu Zhu, Ling-Qi Yan, Zahra Montazeri

Rendering woven fabrics is challenging due to their complex micro geometry and anisotropic appearance. Conventional solutions either fully model every yarn/ply/fibre for high fidelity at a high computational cost, or ignore details, producing unrealistic close-up renderings. In this paper, we introduce a model that shares the advantages of both. Our model requires only binary patterns as input yet offers all the necessary micro-level details by adding the yarn/ply/fibre implicitly. Moreover, we design a double-layer representation to handle light transmission accurately and use a constant-time approach to accurately and efficiently depict parallax and shadowing-masking effects in a tandem way. We compare our model with curve-based and surface-based approaches, on different patterns and under different lighting, and evaluate against photographs to ensure the aforementioned realistic effects are captured.
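
As a rough illustration of what "only binary patterns as input" with constant-time evaluation can mean, the sketch below looks up which yarn is on top at a given surface coordinate by modular indexing into a binary weave pattern. The 2x2 plain-weave pattern, the repeat counts and the function name are illustrative assumptions, not the paper's parametrisation.

```python
# Toy constant-time weave-pattern lookup: a binary pattern (1 = warp on top,
# 0 = weft on top) is queried at a texture coordinate (u, v) in O(1), with no
# per-yarn geometry or baked texture required.
PLAIN_WEAVE = [[1, 0],
               [0, 1]]          # binary weave pattern, rows = weft, cols = warp

def yarn_at(u: float, v: float, repeats_u: int = 200, repeats_v: int = 200) -> str:
    """Return which yarn type is visible at texture coordinate (u, v)."""
    col = int(u * repeats_u) % len(PLAIN_WEAVE[0])
    row = int(v * repeats_v) % len(PLAIN_WEAVE)
    return "warp" if PLAIN_WEAVE[row][col] else "weft"

print(yarn_at(0.1234, 0.5678))
```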

Citations: 0
MetapathVis: Inspecting the Effect of Metapath in Heterogeneous Network Embedding via Visual Analytics
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-01-31 DOI: 10.1111/cgf.15285
Quan Li, Yun Tian, Xiyuan Wang, Laixin Xie, Dandan Lin, Lingling Yi, Xiaojuan Ma

In heterogeneous graphs (HGs), which offer richer network and semantic insights compared to homogeneous graphs, the Metapath technique serves as an essential tool for data mining. This technique facilitates the specification of sequences of entity connections, elucidating the semantic composite relationships between various node types for a range of downstream tasks. Nevertheless, selecting the most appropriate metapath from a pool of candidates and assessing its impact presents significant challenges. To address this issue, our study introduces MetapathVis, an interactive visual analytics system designed to assist machine learning (ML) practitioners in comprehensively understanding and comparing the effects of metapaths from multiple fine-grained perspectives. MetapathVis allows for an in-depth evaluation of various models generated with different metapaths, aligning HG network information at the individual level with model metrics. It also facilitates the tracking of aggregation processes associated with different metapaths. The effectiveness of our approach is validated through three case studies and a user study, with feedback from domain experts confirming that our system significantly aids ML practitioners in evaluating and comprehending the viability of different metapath designs.
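
For readers unfamiliar with the term, a metapath is a sequence of node types (for example Author-Paper-Author), and its instances are concrete paths that follow those types. The toy enumeration below is only meant to make the concept concrete; the graph and the chosen metapath are assumptions, not data or code from MetapathVis.

```python
# Enumerate Author-Paper-Author (APA) metapath instances in a tiny
# heterogeneous graph described with plain dictionaries.
writes = {                       # "writes" edges: author -> papers
    "alice": ["p1", "p2"],
    "bob":   ["p1"],
    "carol": ["p2", "p3"],
}
written_by = {}                  # inverse edge type: paper -> authors
for author, papers in writes.items():
    for p in papers:
        written_by.setdefault(p, []).append(author)

def apa_instances():
    """Yield Author-Paper-Author instances, i.e. co-authorship via a paper."""
    for a1, papers in writes.items():
        for p in papers:
            for a2 in written_by[p]:
                if a1 != a2:
                    yield (a1, p, a2)

# ('alice', 'p1', 'bob') means alice and bob are connected through paper p1.
print(list(apa_instances()))
```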

Citations: 0
Learning Climbing Controllers for Physics-Based Characters
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-01-30 DOI: 10.1111/cgf.15284
Kyungwon Kang, Taehong Gu, Taesoo Kwon

Despite the growing demand for capturing diverse motions, collecting climbing motion data remains challenging due to difficulties in tracking obscured markers and scanning climbing structures. Additionally, preparing varied routes further adds to the complexities of the data collection process. To address these challenges, this paper introduces a physics-based climbing controller for synthesizing climbing motions. The proposed method consists of two learning stages. In the first stage, a hanging policy is trained to naturally grasp holds. This policy is then used to generate a dataset containing hold positions, postures, and grip states, forming favourable initial poses. In the second stage, a climbing policy is trained using this dataset to perform actual climbing movements. The episode begins in a state close to the reference climbing motion, enabling the exploration of more natural climbing style states. This policy enables the character to reach the target position while utilizing its limbs more evenly. The experiments demonstrate that the proposed method effectively identifies good climbing postures and enhances limb coordination across environments with varying slopes and hold patterns.

Citations: 0
Constrained Spectral Uplifting for HDR Environment Maps
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-01-28 DOI: 10.1111/cgf.15280
L. Tódová, A. Wilkie

Spectral representation of assets is an important precondition for achieving physical realism in rendering. However, defining assets by their spectral distribution is complicated and tedious. Therefore, it has become general practice to create RGB assets and convert them into their spectral counterparts prior to rendering. This process is called spectral uplifting. While a multitude of techniques focusing on reflectance uplifting exist, the current state of the art of uplifting emission for image-based lighting consists of simply scaling reflectance uplifts. Although this is usable insofar as the obtained overall scene appearance is not unrealistic, the generated emission spectra are only metamers of the original illumination. This, in turn, can cause deviations from the expected appearance even if the rest of the scene corresponds to real-world data. In a recent publication, we proposed a method capable of uplifting HDR environment maps based on spectral measurements of light sources similar to those present in the maps. To identify the illuminants, we employ an extensive set of emission measurements, and we combine the results with an existing reflectance uplifting method. In addition, we address the problem of environment map capture for the purposes of a spectral rendering pipeline, for which we propose a novel solution. We further extend this work with a detailed evaluation of the method, both in terms of improved colour error and performance.
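
The baseline criticised above, scaling a reflectance uplift to obtain an emission spectrum, can be sketched as follows. The uplift_reflectance function is a hypothetical stand-in for any RGB-to-spectrum reflectance uplift, and the flat spectrum it returns is not physically meaningful; the point is only that the result is a metamer of the true illuminant rather than its measured spectral power distribution.

```python
# Sketch of the prior-art baseline: uplift the chromaticity-normalised RGB of
# an HDR emission value as if it were a reflectance, then scale by intensity.
import numpy as np

WAVELENGTHS = np.arange(380, 781, 5)              # nm, a common sampling

def uplift_reflectance(rgb):
    """Hypothetical placeholder for a reflectance uplift; returns a spectrum."""
    return np.full(WAVELENGTHS.shape, float(np.mean(rgb)))   # not physical

def scaled_emission_uplift(rgb_hdr):
    """Baseline emission uplift: uplift normalised RGB, then scale by intensity."""
    rgb_hdr = np.asarray(rgb_hdr, dtype=float)
    intensity = rgb_hdr.max()
    reflectance_like = uplift_reflectance(rgb_hdr / max(intensity, 1e-8))
    return intensity * reflectance_like            # a metamer, not the true SPD

print(scaled_emission_uplift([12.0, 9.5, 4.0])[:5])
```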

Citations: 0
Single-Shot Example Terrain Sketching by Graph Neural Networks
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-01-25 DOI: 10.1111/cgf.15281
Y. Liu, B. Benes

Terrain generation is a challenging problem. Procedural modelling methods lack control, while machine learning methods often need large training datasets and struggle to preserve the topology information. We propose a method that generates a new terrain from a single training image and a simple user sketch. Our single-shot method preserves the sketch topology while generating diversified results. Our method is based on a graph neural network (GNN) and builds a detailed relation among the sketch-extracted features, that is, ridges and valleys and their neighbouring areas. By disentangling the influence from different sketches, our model generates visually realistic terrains following the user sketch while preserving the features from the real terrains. Experiments are conducted to show both qualitative and quantitative comparisons. The structural similarity index measure of our generated and real terrains is around 0.8 on average.
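
For reference, an SSIM figure like the ~0.8 quoted above could be computed between a generated and a real heightmap as follows; the random arrays stand in for actual terrain data and are assumptions used only to keep the snippet self-contained.

```python
# Compute the structural similarity index (SSIM) between two heightmaps.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(42)
real = rng.random((256, 256)).astype(np.float64)            # placeholder heightmap
generated = np.clip(real + 0.05 * rng.normal(size=real.shape), 0.0, 1.0)

score = structural_similarity(real, generated, data_range=1.0)
print(f"SSIM = {score:.3f}")
```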

Citations: 0